2026-04-02 00:00:07.764147 | Job console starting
2026-04-02 00:00:07.793108 | Updating git repos
2026-04-02 00:00:07.890468 | Cloning repos into workspace
2026-04-02 00:00:08.240092 | Restoring repo states
2026-04-02 00:00:08.267335 | Merging changes
2026-04-02 00:00:08.267357 | Checking out repos
2026-04-02 00:00:08.905348 | Preparing playbooks
2026-04-02 00:00:09.876593 | Running Ansible setup
2026-04-02 00:00:17.806226 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-02 00:00:19.736127 |
2026-04-02 00:00:19.736242 | PLAY [Base pre]
2026-04-02 00:00:19.791693 |
2026-04-02 00:00:19.791827 | TASK [Setup log path fact]
2026-04-02 00:00:19.846107 | orchestrator | ok
2026-04-02 00:00:19.878048 |
2026-04-02 00:00:19.878163 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-02 00:00:19.998341 | orchestrator | ok
2026-04-02 00:00:20.027084 |
2026-04-02 00:00:20.027199 | TASK [emit-job-header : Print job information]
2026-04-02 00:00:20.148499 | # Job Information
2026-04-02 00:00:20.148638 | Ansible Version: 2.16.14
2026-04-02 00:00:20.148668 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-02 00:00:20.148696 | Pipeline: periodic-midnight
2026-04-02 00:00:20.148715 | Executor: 521e9411259a
2026-04-02 00:00:20.148732 | Triggered by: https://github.com/osism/testbed
2026-04-02 00:00:20.148775 | Event ID: bbda5b98705b4c80ad0d6f19cf73b128
2026-04-02 00:00:20.154506 |
2026-04-02 00:00:20.154597 | LOOP [emit-job-header : Print node information]
2026-04-02 00:00:20.523488 | orchestrator | ok:
2026-04-02 00:00:20.523674 | orchestrator | # Node Information
2026-04-02 00:00:20.523707 | orchestrator | Inventory Hostname: orchestrator
2026-04-02 00:00:20.523729 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-02 00:00:20.523749 | orchestrator | Username: zuul-testbed04
2026-04-02 00:00:20.523767 | orchestrator | Distro: Debian 12.13
2026-04-02 00:00:20.523787 | orchestrator | Provider: static-testbed
2026-04-02 00:00:20.523835 | orchestrator | Region:
2026-04-02 00:00:20.523857 | orchestrator | Label: testbed-orchestrator
2026-04-02 00:00:20.523875 | orchestrator | Product Name: OpenStack Nova
2026-04-02 00:00:20.523892 | orchestrator | Interface IP: 81.163.193.140
2026-04-02 00:00:20.551027 |
2026-04-02 00:00:20.551134 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-02 00:00:22.057341 | orchestrator -> localhost | changed
2026-04-02 00:00:22.065064 |
2026-04-02 00:00:22.065183 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-02 00:00:24.361414 | orchestrator -> localhost | changed
2026-04-02 00:00:24.382762 |
2026-04-02 00:00:24.382897 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-02 00:00:25.175754 | orchestrator -> localhost | ok
2026-04-02 00:00:25.184975 |
2026-04-02 00:00:25.185079 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-02 00:00:25.232499 | orchestrator | ok
2026-04-02 00:00:25.255385 | orchestrator | included: /var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-02 00:00:25.261727 |
2026-04-02 00:00:25.261830 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-02 00:00:27.019775 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-02 00:00:27.019964 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/work/7d3b30e5ba19432986235cf5def78ef7_id_rsa
2026-04-02 00:00:27.019997 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/work/7d3b30e5ba19432986235cf5def78ef7_id_rsa.pub
2026-04-02 00:00:27.020020 | orchestrator -> localhost | The key fingerprint is:
2026-04-02 00:00:27.020042 | orchestrator -> localhost | SHA256:ltOU51Q+Y+Csr4uujTCkeT2V9Ub1c9XhrBt9mph36jg zuul-build-sshkey
2026-04-02 00:00:27.020062 | orchestrator -> localhost | The key's randomart image is:
2026-04-02 00:00:27.020090 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-02 00:00:27.020110 | orchestrator -> localhost | | . o.+|
2026-04-02 00:00:27.020128 | orchestrator -> localhost | | + =o.o|
2026-04-02 00:00:27.020145 | orchestrator -> localhost | | + * =+o|
2026-04-02 00:00:27.020162 | orchestrator -> localhost | | * B .ooo|
2026-04-02 00:00:27.020179 | orchestrator -> localhost | | . S o +o ..|
2026-04-02 00:00:27.020199 | orchestrator -> localhost | | + . o . o ooo.|
2026-04-02 00:00:27.020216 | orchestrator -> localhost | | o + o +.+ .|
2026-04-02 00:00:27.020233 | orchestrator -> localhost | | . o + . .Eo o |
2026-04-02 00:00:27.020250 | orchestrator -> localhost | | oo+ o..oo |
2026-04-02 00:00:27.020267 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-02 00:00:27.020308 | orchestrator -> localhost | ok: Runtime: 0:00:00.674302
2026-04-02 00:00:27.026145 |
2026-04-02 00:00:27.026230 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-02 00:00:27.077492 | orchestrator | ok
2026-04-02 00:00:27.097274 | orchestrator | included: /var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-02 00:00:27.133336 |
2026-04-02 00:00:27.133431 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-02 00:00:27.168372 | orchestrator | skipping: Conditional result was False
2026-04-02 00:00:27.174577 |
2026-04-02 00:00:27.174668 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-02 00:00:27.892669 | orchestrator | changed
2026-04-02 00:00:27.906171 |
2026-04-02 00:00:27.906262 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-02 00:00:28.251164 | orchestrator | ok
2026-04-02 00:00:28.289165 |
2026-04-02 00:00:28.289264 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-02 00:00:28.847266 | orchestrator | ok
2026-04-02 00:00:28.867563 |
2026-04-02 00:00:28.867674 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-02 00:00:29.432251 | orchestrator | ok
2026-04-02 00:00:29.437175 |
2026-04-02 00:00:29.437253 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-02 00:00:29.479861 | orchestrator | skipping: Conditional result was False
2026-04-02 00:00:29.485537 |
2026-04-02 00:00:29.485612 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-02 00:00:31.149843 | orchestrator -> localhost | changed
2026-04-02 00:00:31.172047 |
2026-04-02 00:00:31.172154 | TASK [add-build-sshkey : Add back temp key]
2026-04-02 00:00:32.279498 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/work/7d3b30e5ba19432986235cf5def78ef7_id_rsa (zuul-build-sshkey)
2026-04-02 00:00:32.279686 | orchestrator -> localhost | ok: Runtime: 0:00:00.023118
2026-04-02 00:00:32.285912 |
2026-04-02 00:00:32.285993 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-02 00:00:32.928971 | orchestrator | ok
2026-04-02 00:00:32.937633 |
2026-04-02 00:00:32.937729 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-02 00:00:32.980786 | orchestrator | skipping: Conditional result was False
2026-04-02 00:00:33.056775 |
2026-04-02 00:00:33.056896 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-02 00:00:33.558408 | orchestrator | ok
2026-04-02 00:00:33.595218 |
2026-04-02 00:00:33.595604 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-02 00:00:33.622932 | orchestrator | ok
2026-04-02 00:00:33.628962 |
2026-04-02 00:00:33.629047 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-02 00:00:34.248549 | orchestrator -> localhost | ok
2026-04-02 00:00:34.254383 |
2026-04-02 00:00:34.254470 | TASK [validate-host : Collect information about the host]
2026-04-02 00:00:35.797433 | orchestrator | ok
2026-04-02 00:00:35.814995 |
2026-04-02 00:00:35.815102 | TASK [validate-host : Sanitize hostname]
2026-04-02 00:00:35.951737 | orchestrator | ok
2026-04-02 00:00:35.956145 |
2026-04-02 00:00:35.956224 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-02 00:00:37.114902 | orchestrator -> localhost | changed
2026-04-02 00:00:37.120176 |
2026-04-02 00:00:37.120260 | TASK [validate-host : Collect information about zuul worker]
2026-04-02 00:00:37.816485 | orchestrator | ok
2026-04-02 00:00:37.820706 |
2026-04-02 00:00:37.820810 | TASK [validate-host : Write out all zuul information for each host]
2026-04-02 00:00:39.383700 | orchestrator -> localhost | changed
2026-04-02 00:00:39.393321 |
2026-04-02 00:00:39.393413 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-02 00:00:39.689804 | orchestrator | ok
2026-04-02 00:00:39.700979 |
2026-04-02 00:00:39.701066 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-02 00:02:05.177756 | orchestrator | changed:
2026-04-02 00:02:05.179368 | orchestrator | .d..t...... src/
2026-04-02 00:02:05.179438 | orchestrator | .d..t...... src/github.com/
2026-04-02 00:02:05.179465 | orchestrator | .d..t...... src/github.com/osism/
2026-04-02 00:02:05.179488 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-02 00:02:05.179510 | orchestrator | RedHat.yml
2026-04-02 00:02:05.194015 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-02 00:02:05.194033 | orchestrator | RedHat.yml
2026-04-02 00:02:05.194086 | orchestrator | = 2.2.0"...
2026-04-02 00:02:17.820715 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-02 00:02:17.836530 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-02 00:02:18.245449 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-04-02 00:02:18.775587 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-02 00:02:18.834544 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-02 00:02:19.581538 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-02 00:02:19.675285 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-02 00:02:20.474437 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-02 00:02:20.474502 | orchestrator |
2026-04-02 00:02:20.474509 | orchestrator | Providers are signed by their developers.
2026-04-02 00:02:20.474514 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-02 00:02:20.474520 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-02 00:02:20.474527 | orchestrator |
2026-04-02 00:02:20.474613 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-02 00:02:20.474623 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-02 00:02:20.474627 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-02 00:02:20.474631 | orchestrator | you run "tofu init" in the future.
2026-04-02 00:02:20.474767 | orchestrator |
2026-04-02 00:02:20.474851 | orchestrator | OpenTofu has been successfully initialized!
2026-04-02 00:02:20.474857 | orchestrator |
2026-04-02 00:02:20.474862 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-02 00:02:20.474866 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-02 00:02:20.474871 | orchestrator | should now work.
2026-04-02 00:02:20.474875 | orchestrator |
2026-04-02 00:02:20.474879 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-02 00:02:20.474883 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-02 00:02:20.474888 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-02 00:02:21.131496 | orchestrator | Created and switched to workspace "ci"!
2026-04-02 00:02:21.131575 | orchestrator |
2026-04-02 00:02:21.131588 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-02 00:02:21.131599 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-02 00:02:21.131611 | orchestrator | for this configuration.
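The provider constraints reported during the init above would correspond to a `required_providers` block roughly like the following sketch. This is a reconstruction from the log, not the testbed's actual configuration; the provider that owns the truncated `>= 2.2.0` constraint is not identifiable from this log and is therefore omitted.

```hcl
# Sketch reconstructed from the init output; the real configuration in
# github.com/osism/testbed may differ. The owner of the truncated
# ">= 2.2.0" constraint is not visible in this log and is left out.
terraform {
  required_providers {
    null = {
      source = "hashicorp/null" # resolved to v3.2.4 ("latest" per the log)
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.4.0 per the log
    }
  }
}
```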
2026-04-02 00:02:21.260074 | orchestrator | ci.auto.tfvars
2026-04-02 00:02:21.617594 | orchestrator | default_custom.tf
2026-04-02 00:02:22.578371 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-02 00:02:23.159549 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-02 00:02:23.470179 | orchestrator |
2026-04-02 00:02:23.470237 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-02 00:02:23.470243 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-02 00:02:23.470248 | orchestrator | + create
2026-04-02 00:02:23.470253 | orchestrator | <= read (data resources)
2026-04-02 00:02:23.470257 | orchestrator |
2026-04-02 00:02:23.470262 | orchestrator | OpenTofu will perform the following actions:
2026-04-02 00:02:23.470266 | orchestrator |
2026-04-02 00:02:23.470270 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-02 00:02:23.470274 | orchestrator | # (config refers to values not yet known)
2026-04-02 00:02:23.470278 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-02 00:02:23.470282 | orchestrator | + checksum = (known after apply)
2026-04-02 00:02:23.470286 | orchestrator | + created_at = (known after apply)
2026-04-02 00:02:23.470290 | orchestrator | + file = (known after apply)
2026-04-02 00:02:23.470294 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470311 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.470316 | orchestrator | + min_disk_gb = (known after apply)
2026-04-02 00:02:23.470320 | orchestrator | + min_ram_mb = (known after apply)
2026-04-02 00:02:23.470335 | orchestrator | + most_recent = true
2026-04-02 00:02:23.470339 | orchestrator | + name = (known after apply)
2026-04-02 00:02:23.470343 | orchestrator | + protected = (known after apply)
2026-04-02 00:02:23.470347 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.470353 | orchestrator | + schema = (known after apply)
2026-04-02 00:02:23.470357 | orchestrator | + size_bytes = (known after apply)
2026-04-02 00:02:23.470361 | orchestrator | + tags = (known after apply)
2026-04-02 00:02:23.470365 | orchestrator | + updated_at = (known after apply)
2026-04-02 00:02:23.470369 | orchestrator | }
2026-04-02 00:02:23.470373 | orchestrator |
2026-04-02 00:02:23.470377 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-02 00:02:23.470381 | orchestrator | # (config refers to values not yet known)
2026-04-02 00:02:23.470385 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-02 00:02:23.470389 | orchestrator | + checksum = (known after apply)
2026-04-02 00:02:23.470393 | orchestrator | + created_at = (known after apply)
2026-04-02 00:02:23.470397 | orchestrator | + file = (known after apply)
2026-04-02 00:02:23.470400 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470404 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.470408 | orchestrator | + min_disk_gb = (known after apply)
2026-04-02 00:02:23.470412 | orchestrator | + min_ram_mb = (known after apply)
2026-04-02 00:02:23.470416 | orchestrator | + most_recent = true
2026-04-02 00:02:23.470420 | orchestrator | + name = (known after apply)
2026-04-02 00:02:23.470423 | orchestrator | + protected = (known after apply)
2026-04-02 00:02:23.470427 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.470431 | orchestrator | + schema = (known after apply)
2026-04-02 00:02:23.470435 | orchestrator | + size_bytes = (known after apply)
2026-04-02 00:02:23.470439 | orchestrator | + tags = (known after apply)
2026-04-02 00:02:23.470442 | orchestrator | + updated_at = (known after apply)
2026-04-02 00:02:23.470446 | orchestrator | }
2026-04-02 00:02:23.470450 | orchestrator |
2026-04-02 00:02:23.470454 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-02 00:02:23.470458 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-02 00:02:23.470462 | orchestrator | + content = (known after apply)
2026-04-02 00:02:23.470466 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-02 00:02:23.470470 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-02 00:02:23.470474 | orchestrator | + content_md5 = (known after apply)
2026-04-02 00:02:23.470478 | orchestrator | + content_sha1 = (known after apply)
2026-04-02 00:02:23.470482 | orchestrator | + content_sha256 = (known after apply)
2026-04-02 00:02:23.470485 | orchestrator | + content_sha512 = (known after apply)
2026-04-02 00:02:23.470489 | orchestrator | + directory_permission = "0777"
2026-04-02 00:02:23.470493 | orchestrator | + file_permission = "0644"
2026-04-02 00:02:23.470497 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-02 00:02:23.470501 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470505 | orchestrator | }
2026-04-02 00:02:23.470508 | orchestrator |
2026-04-02 00:02:23.470512 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-02 00:02:23.470516 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-02 00:02:23.470520 | orchestrator | + content = (known after apply)
2026-04-02 00:02:23.470524 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-02 00:02:23.470527 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-02 00:02:23.470531 | orchestrator | + content_md5 = (known after apply)
2026-04-02 00:02:23.470535 | orchestrator | + content_sha1 = (known after apply)
2026-04-02 00:02:23.470539 | orchestrator | + content_sha256 = (known after apply)
2026-04-02 00:02:23.470547 | orchestrator | + content_sha512 = (known after apply)
2026-04-02 00:02:23.470551 | orchestrator | + directory_permission = "0777"
2026-04-02 00:02:23.470555 | orchestrator | + file_permission = "0644"
2026-04-02 00:02:23.470563 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-02 00:02:23.470567 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470571 | orchestrator | }
2026-04-02 00:02:23.470575 | orchestrator |
2026-04-02 00:02:23.470578 | orchestrator | # local_file.inventory will be created
2026-04-02 00:02:23.470582 | orchestrator | + resource "local_file" "inventory" {
2026-04-02 00:02:23.470586 | orchestrator | + content = (known after apply)
2026-04-02 00:02:23.470590 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-02 00:02:23.470594 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-02 00:02:23.470597 | orchestrator | + content_md5 = (known after apply)
2026-04-02 00:02:23.470601 | orchestrator | + content_sha1 = (known after apply)
2026-04-02 00:02:23.470605 | orchestrator | + content_sha256 = (known after apply)
2026-04-02 00:02:23.470609 | orchestrator | + content_sha512 = (known after apply)
2026-04-02 00:02:23.470613 | orchestrator | + directory_permission = "0777"
2026-04-02 00:02:23.470617 | orchestrator | + file_permission = "0644"
2026-04-02 00:02:23.470621 | orchestrator | + filename = "inventory.ci"
2026-04-02 00:02:23.470624 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470628 | orchestrator | }
2026-04-02 00:02:23.470632 | orchestrator |
2026-04-02 00:02:23.470636 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-02 00:02:23.470640 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-02 00:02:23.470643 | orchestrator | + content = (sensitive value)
2026-04-02 00:02:23.470647 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-02 00:02:23.470651 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-02 00:02:23.470655 | orchestrator | + content_md5 = (known after apply)
2026-04-02 00:02:23.470659 | orchestrator | + content_sha1 = (known after apply)
2026-04-02 00:02:23.470663 | orchestrator | + content_sha256 = (known after apply)
2026-04-02 00:02:23.470675 | orchestrator | + content_sha512 = (known after apply)
2026-04-02 00:02:23.470679 | orchestrator | + directory_permission = "0700"
2026-04-02 00:02:23.470683 | orchestrator | + file_permission = "0600"
2026-04-02 00:02:23.470687 | orchestrator | + filename = ".id_rsa.ci"
2026-04-02 00:02:23.470691 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470695 | orchestrator | }
2026-04-02 00:02:23.470698 | orchestrator |
2026-04-02 00:02:23.470702 | orchestrator | # null_resource.node_semaphore will be created
2026-04-02 00:02:23.470706 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-02 00:02:23.470710 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470714 | orchestrator | }
2026-04-02 00:02:23.470718 | orchestrator |
2026-04-02 00:02:23.470721 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-02 00:02:23.470725 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-02 00:02:23.470729 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.470733 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.470737 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470740 | orchestrator | + image_id = (known after apply)
2026-04-02 00:02:23.470744 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.470748 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-02 00:02:23.470752 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.470756 | orchestrator | + size = 80
2026-04-02 00:02:23.470760 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.470764 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.470767 | orchestrator | }
2026-04-02 00:02:23.470771 | orchestrator |
2026-04-02 00:02:23.470775 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-02 00:02:23.470779 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-02 00:02:23.470783 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.470787 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.470790 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470797 | orchestrator | + image_id = (known after apply)
2026-04-02 00:02:23.470801 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.470805 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-02 00:02:23.470809 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.470812 | orchestrator | + size = 80
2026-04-02 00:02:23.470816 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.470820 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.470824 | orchestrator | }
2026-04-02 00:02:23.470828 | orchestrator |
2026-04-02 00:02:23.470832 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-02 00:02:23.470835 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-02 00:02:23.470839 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.470843 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.470847 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470851 | orchestrator | + image_id = (known after apply)
2026-04-02 00:02:23.470855 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.470858 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-02 00:02:23.470862 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.470866 | orchestrator | + size = 80
2026-04-02 00:02:23.470870 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.470873 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.470877 | orchestrator | }
2026-04-02 00:02:23.470881 | orchestrator |
2026-04-02 00:02:23.470885 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-02 00:02:23.470889 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-02 00:02:23.470893 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.470896 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.470900 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470904 | orchestrator | + image_id = (known after apply)
2026-04-02 00:02:23.470908 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.470912 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-02 00:02:23.470916 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.470919 | orchestrator | + size = 80
2026-04-02 00:02:23.470926 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.470930 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.470933 | orchestrator | }
2026-04-02 00:02:23.470937 | orchestrator |
2026-04-02 00:02:23.470941 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-02 00:02:23.470945 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-02 00:02:23.470949 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.470952 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.470956 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.470960 | orchestrator | + image_id = (known after apply)
2026-04-02 00:02:23.470964 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.470968 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-02 00:02:23.470972 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.470975 | orchestrator | + size = 80
2026-04-02 00:02:23.470979 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.470983 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.470987 | orchestrator | }
2026-04-02 00:02:23.470991 | orchestrator |
2026-04-02 00:02:23.470994 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-02 00:02:23.470998 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-02 00:02:23.471002 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471006 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471010 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471016 | orchestrator | + image_id = (known after apply)
2026-04-02 00:02:23.471020 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471024 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-02 00:02:23.471028 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471032 | orchestrator | + size = 80
2026-04-02 00:02:23.471036 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471039 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471043 | orchestrator | }
2026-04-02 00:02:23.471047 | orchestrator |
2026-04-02 00:02:23.471051 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-02 00:02:23.471057 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-02 00:02:23.471061 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471065 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471069 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471073 | orchestrator | + image_id = (known after apply)
2026-04-02 00:02:23.471077 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471080 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-02 00:02:23.471084 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471088 | orchestrator | + size = 80
2026-04-02 00:02:23.471092 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471095 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471099 | orchestrator | }
2026-04-02 00:02:23.471103 | orchestrator |
2026-04-02 00:02:23.471107 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-02 00:02:23.471111 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471115 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471118 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471122 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471126 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471130 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-02 00:02:23.471134 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471137 | orchestrator | + size = 20
2026-04-02 00:02:23.471141 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471145 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471149 | orchestrator | }
2026-04-02 00:02:23.471153 | orchestrator |
2026-04-02 00:02:23.471156 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-02 00:02:23.471160 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471164 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471168 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471171 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471175 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471179 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-02 00:02:23.471183 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471187 | orchestrator | + size = 20
2026-04-02 00:02:23.471190 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471194 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471198 | orchestrator | }
2026-04-02 00:02:23.471202 | orchestrator |
2026-04-02 00:02:23.471206 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-02 00:02:23.471209 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471213 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471217 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471221 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471225 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471228 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-02 00:02:23.471232 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471239 | orchestrator | + size = 20
2026-04-02 00:02:23.471242 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471246 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471250 | orchestrator | }
2026-04-02 00:02:23.471254 | orchestrator |
2026-04-02 00:02:23.471258 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-02 00:02:23.471261 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471265 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471269 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471273 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471279 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471283 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-02 00:02:23.471287 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471291 | orchestrator | + size = 20
2026-04-02 00:02:23.471294 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471298 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471302 | orchestrator | }
2026-04-02 00:02:23.471306 | orchestrator |
2026-04-02 00:02:23.471309 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-02 00:02:23.471313 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471317 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471321 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471354 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471358 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471362 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-02 00:02:23.471366 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471370 | orchestrator | + size = 20
2026-04-02 00:02:23.471373 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471377 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471381 | orchestrator | }
2026-04-02 00:02:23.471385 | orchestrator |
2026-04-02 00:02:23.471388 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-02 00:02:23.471392 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471396 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471400 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471404 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471407 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471411 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-02 00:02:23.471415 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471419 | orchestrator | + size = 20
2026-04-02 00:02:23.471423 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471426 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471430 | orchestrator | }
2026-04-02 00:02:23.471434 | orchestrator |
2026-04-02 00:02:23.471438 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-02 00:02:23.471441 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471445 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471449 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471453 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471460 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471464 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-02 00:02:23.471468 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471471 | orchestrator | + size = 20
2026-04-02 00:02:23.471475 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471479 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471483 | orchestrator | }
2026-04-02 00:02:23.471487 | orchestrator |
2026-04-02 00:02:23.471490 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-02 00:02:23.471494 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-02 00:02:23.471501 | orchestrator | + attachment = (known after apply)
2026-04-02 00:02:23.471505 | orchestrator | + availability_zone = "nova"
2026-04-02 00:02:23.471509 | orchestrator | + id = (known after apply)
2026-04-02 00:02:23.471513 | orchestrator | + metadata = (known after apply)
2026-04-02 00:02:23.471516 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-02 00:02:23.471520 | orchestrator | + region = (known after apply)
2026-04-02 00:02:23.471524 | orchestrator | + size = 20
2026-04-02 00:02:23.471528 | orchestrator | + volume_retype_policy = "never"
2026-04-02 00:02:23.471532 | orchestrator | + volume_type = "ssd"
2026-04-02 00:02:23.471535 | orchestrator | }
2026-04-02 00:02:23.471539 | orchestrator |
2026-04-02 00:02:23.471543 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-02 00:02:23.471547 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-02 00:02:23.471551 | orchestrator | + attachment = (known after apply) 2026-04-02 00:02:23.471554 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.471558 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.471562 | orchestrator | + metadata = (known after apply) 2026-04-02 00:02:23.471566 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-02 00:02:23.471570 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.471573 | orchestrator | + size = 20 2026-04-02 00:02:23.471577 | orchestrator | + volume_retype_policy = "never" 2026-04-02 00:02:23.471582 | orchestrator | + volume_type = "ssd" 2026-04-02 00:02:23.471585 | orchestrator | } 2026-04-02 00:02:23.471589 | orchestrator | 2026-04-02 00:02:23.471593 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-02 00:02:23.471597 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-02 00:02:23.471601 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-02 00:02:23.471604 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-02 00:02:23.471608 | orchestrator | + all_metadata = (known after apply) 2026-04-02 00:02:23.471612 | orchestrator | + all_tags = (known after apply) 2026-04-02 00:02:23.471616 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.471620 | orchestrator | + config_drive = true 2026-04-02 00:02:23.471626 | orchestrator | + created = (known after apply) 2026-04-02 00:02:23.471629 | orchestrator | + flavor_id = (known after apply) 2026-04-02 00:02:23.471633 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-02 00:02:23.471637 | orchestrator | + force_delete = false 2026-04-02 00:02:23.471641 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-02 00:02:23.471645 | 
orchestrator | + id = (known after apply) 2026-04-02 00:02:23.471648 | orchestrator | + image_id = (known after apply) 2026-04-02 00:02:23.471652 | orchestrator | + image_name = (known after apply) 2026-04-02 00:02:23.471656 | orchestrator | + key_pair = "testbed" 2026-04-02 00:02:23.471660 | orchestrator | + name = "testbed-manager" 2026-04-02 00:02:23.471664 | orchestrator | + power_state = "active" 2026-04-02 00:02:23.471667 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.471671 | orchestrator | + security_groups = (known after apply) 2026-04-02 00:02:23.471675 | orchestrator | + stop_before_destroy = false 2026-04-02 00:02:23.471679 | orchestrator | + updated = (known after apply) 2026-04-02 00:02:23.471683 | orchestrator | + user_data = (sensitive value) 2026-04-02 00:02:23.471687 | orchestrator | 2026-04-02 00:02:23.471691 | orchestrator | + block_device { 2026-04-02 00:02:23.471694 | orchestrator | + boot_index = 0 2026-04-02 00:02:23.471698 | orchestrator | + delete_on_termination = false 2026-04-02 00:02:23.471702 | orchestrator | + destination_type = "volume" 2026-04-02 00:02:23.471706 | orchestrator | + multiattach = false 2026-04-02 00:02:23.471710 | orchestrator | + source_type = "volume" 2026-04-02 00:02:23.471713 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.471720 | orchestrator | } 2026-04-02 00:02:23.471724 | orchestrator | 2026-04-02 00:02:23.471728 | orchestrator | + network { 2026-04-02 00:02:23.471732 | orchestrator | + access_network = false 2026-04-02 00:02:23.471736 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-02 00:02:23.471740 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-02 00:02:23.471743 | orchestrator | + mac = (known after apply) 2026-04-02 00:02:23.471747 | orchestrator | + name = (known after apply) 2026-04-02 00:02:23.471751 | orchestrator | + port = (known after apply) 2026-04-02 00:02:23.471755 | orchestrator | + uuid = (known after apply) 2026-04-02 
00:02:23.471758 | orchestrator | } 2026-04-02 00:02:23.471762 | orchestrator | } 2026-04-02 00:02:23.471766 | orchestrator | 2026-04-02 00:02:23.471770 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-02 00:02:23.471774 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-02 00:02:23.471778 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-02 00:02:23.471781 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-02 00:02:23.471785 | orchestrator | + all_metadata = (known after apply) 2026-04-02 00:02:23.471789 | orchestrator | + all_tags = (known after apply) 2026-04-02 00:02:23.471793 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.471797 | orchestrator | + config_drive = true 2026-04-02 00:02:23.471800 | orchestrator | + created = (known after apply) 2026-04-02 00:02:23.471804 | orchestrator | + flavor_id = (known after apply) 2026-04-02 00:02:23.471808 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-02 00:02:23.471812 | orchestrator | + force_delete = false 2026-04-02 00:02:23.471815 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-02 00:02:23.471819 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.471823 | orchestrator | + image_id = (known after apply) 2026-04-02 00:02:23.471827 | orchestrator | + image_name = (known after apply) 2026-04-02 00:02:23.471831 | orchestrator | + key_pair = "testbed" 2026-04-02 00:02:23.471835 | orchestrator | + name = "testbed-node-0" 2026-04-02 00:02:23.471838 | orchestrator | + power_state = "active" 2026-04-02 00:02:23.471845 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.471849 | orchestrator | + security_groups = (known after apply) 2026-04-02 00:02:23.471853 | orchestrator | + stop_before_destroy = false 2026-04-02 00:02:23.471856 | orchestrator | + updated = (known after apply) 2026-04-02 00:02:23.471860 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-02 00:02:23.471864 | orchestrator | 2026-04-02 00:02:23.471868 | orchestrator | + block_device { 2026-04-02 00:02:23.471872 | orchestrator | + boot_index = 0 2026-04-02 00:02:23.471876 | orchestrator | + delete_on_termination = false 2026-04-02 00:02:23.471879 | orchestrator | + destination_type = "volume" 2026-04-02 00:02:23.471883 | orchestrator | + multiattach = false 2026-04-02 00:02:23.471887 | orchestrator | + source_type = "volume" 2026-04-02 00:02:23.471891 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.471894 | orchestrator | } 2026-04-02 00:02:23.471898 | orchestrator | 2026-04-02 00:02:23.471902 | orchestrator | + network { 2026-04-02 00:02:23.471906 | orchestrator | + access_network = false 2026-04-02 00:02:23.471910 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-02 00:02:23.471913 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-02 00:02:23.471917 | orchestrator | + mac = (known after apply) 2026-04-02 00:02:23.471921 | orchestrator | + name = (known after apply) 2026-04-02 00:02:23.471925 | orchestrator | + port = (known after apply) 2026-04-02 00:02:23.471928 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.471932 | orchestrator | } 2026-04-02 00:02:23.471936 | orchestrator | } 2026-04-02 00:02:23.471940 | orchestrator | 2026-04-02 00:02:23.471944 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-02 00:02:23.471948 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-02 00:02:23.471951 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-02 00:02:23.471962 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-02 00:02:23.471966 | orchestrator | + all_metadata = (known after apply) 2026-04-02 00:02:23.471970 | orchestrator | + all_tags = (known after apply) 2026-04-02 00:02:23.471974 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.471977 
| orchestrator | + config_drive = true 2026-04-02 00:02:23.471981 | orchestrator | + created = (known after apply) 2026-04-02 00:02:23.471985 | orchestrator | + flavor_id = (known after apply) 2026-04-02 00:02:23.471989 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-02 00:02:23.471993 | orchestrator | + force_delete = false 2026-04-02 00:02:23.471997 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-02 00:02:23.472000 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472004 | orchestrator | + image_id = (known after apply) 2026-04-02 00:02:23.472008 | orchestrator | + image_name = (known after apply) 2026-04-02 00:02:23.472012 | orchestrator | + key_pair = "testbed" 2026-04-02 00:02:23.472016 | orchestrator | + name = "testbed-node-1" 2026-04-02 00:02:23.472020 | orchestrator | + power_state = "active" 2026-04-02 00:02:23.472023 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472027 | orchestrator | + security_groups = (known after apply) 2026-04-02 00:02:23.472031 | orchestrator | + stop_before_destroy = false 2026-04-02 00:02:23.472035 | orchestrator | + updated = (known after apply) 2026-04-02 00:02:23.472041 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-02 00:02:23.472045 | orchestrator | 2026-04-02 00:02:23.472049 | orchestrator | + block_device { 2026-04-02 00:02:23.472053 | orchestrator | + boot_index = 0 2026-04-02 00:02:23.472057 | orchestrator | + delete_on_termination = false 2026-04-02 00:02:23.472060 | orchestrator | + destination_type = "volume" 2026-04-02 00:02:23.472064 | orchestrator | + multiattach = false 2026-04-02 00:02:23.472068 | orchestrator | + source_type = "volume" 2026-04-02 00:02:23.472072 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472076 | orchestrator | } 2026-04-02 00:02:23.472079 | orchestrator | 2026-04-02 00:02:23.472083 | orchestrator | + network { 2026-04-02 00:02:23.472087 | orchestrator | + access_network = 
false 2026-04-02 00:02:23.472091 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-02 00:02:23.472095 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-02 00:02:23.472099 | orchestrator | + mac = (known after apply) 2026-04-02 00:02:23.472102 | orchestrator | + name = (known after apply) 2026-04-02 00:02:23.472106 | orchestrator | + port = (known after apply) 2026-04-02 00:02:23.472110 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472114 | orchestrator | } 2026-04-02 00:02:23.472118 | orchestrator | } 2026-04-02 00:02:23.472122 | orchestrator | 2026-04-02 00:02:23.472125 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-02 00:02:23.472129 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-02 00:02:23.472133 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-02 00:02:23.472137 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-02 00:02:23.472141 | orchestrator | + all_metadata = (known after apply) 2026-04-02 00:02:23.472145 | orchestrator | + all_tags = (known after apply) 2026-04-02 00:02:23.472149 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.472153 | orchestrator | + config_drive = true 2026-04-02 00:02:23.472156 | orchestrator | + created = (known after apply) 2026-04-02 00:02:23.472160 | orchestrator | + flavor_id = (known after apply) 2026-04-02 00:02:23.472164 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-02 00:02:23.472168 | orchestrator | + force_delete = false 2026-04-02 00:02:23.472172 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-02 00:02:23.472175 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472179 | orchestrator | + image_id = (known after apply) 2026-04-02 00:02:23.472186 | orchestrator | + image_name = (known after apply) 2026-04-02 00:02:23.472190 | orchestrator | + key_pair = "testbed" 2026-04-02 00:02:23.472193 | orchestrator | + name = 
"testbed-node-2" 2026-04-02 00:02:23.472197 | orchestrator | + power_state = "active" 2026-04-02 00:02:23.472201 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472205 | orchestrator | + security_groups = (known after apply) 2026-04-02 00:02:23.472209 | orchestrator | + stop_before_destroy = false 2026-04-02 00:02:23.472213 | orchestrator | + updated = (known after apply) 2026-04-02 00:02:23.472216 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-02 00:02:23.472220 | orchestrator | 2026-04-02 00:02:23.472224 | orchestrator | + block_device { 2026-04-02 00:02:23.472228 | orchestrator | + boot_index = 0 2026-04-02 00:02:23.472232 | orchestrator | + delete_on_termination = false 2026-04-02 00:02:23.472236 | orchestrator | + destination_type = "volume" 2026-04-02 00:02:23.472242 | orchestrator | + multiattach = false 2026-04-02 00:02:23.472246 | orchestrator | + source_type = "volume" 2026-04-02 00:02:23.472249 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472253 | orchestrator | } 2026-04-02 00:02:23.472257 | orchestrator | 2026-04-02 00:02:23.472261 | orchestrator | + network { 2026-04-02 00:02:23.472265 | orchestrator | + access_network = false 2026-04-02 00:02:23.472268 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-02 00:02:23.472272 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-02 00:02:23.472276 | orchestrator | + mac = (known after apply) 2026-04-02 00:02:23.472280 | orchestrator | + name = (known after apply) 2026-04-02 00:02:23.472284 | orchestrator | + port = (known after apply) 2026-04-02 00:02:23.472288 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472292 | orchestrator | } 2026-04-02 00:02:23.472295 | orchestrator | } 2026-04-02 00:02:23.472299 | orchestrator | 2026-04-02 00:02:23.472305 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-02 00:02:23.472309 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-02 00:02:23.472313 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-02 00:02:23.472317 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-02 00:02:23.472321 | orchestrator | + all_metadata = (known after apply) 2026-04-02 00:02:23.472336 | orchestrator | + all_tags = (known after apply) 2026-04-02 00:02:23.472340 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.472344 | orchestrator | + config_drive = true 2026-04-02 00:02:23.472347 | orchestrator | + created = (known after apply) 2026-04-02 00:02:23.472351 | orchestrator | + flavor_id = (known after apply) 2026-04-02 00:02:23.472355 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-02 00:02:23.472359 | orchestrator | + force_delete = false 2026-04-02 00:02:23.472363 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-02 00:02:23.472367 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472371 | orchestrator | + image_id = (known after apply) 2026-04-02 00:02:23.472374 | orchestrator | + image_name = (known after apply) 2026-04-02 00:02:23.472378 | orchestrator | + key_pair = "testbed" 2026-04-02 00:02:23.472382 | orchestrator | + name = "testbed-node-3" 2026-04-02 00:02:23.472386 | orchestrator | + power_state = "active" 2026-04-02 00:02:23.472390 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472394 | orchestrator | + security_groups = (known after apply) 2026-04-02 00:02:23.472398 | orchestrator | + stop_before_destroy = false 2026-04-02 00:02:23.472401 | orchestrator | + updated = (known after apply) 2026-04-02 00:02:23.472405 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-02 00:02:23.472409 | orchestrator | 2026-04-02 00:02:23.472413 | orchestrator | + block_device { 2026-04-02 00:02:23.472417 | orchestrator | + boot_index = 0 2026-04-02 00:02:23.472421 | orchestrator | + delete_on_termination = false 2026-04-02 
00:02:23.472424 | orchestrator | + destination_type = "volume" 2026-04-02 00:02:23.472431 | orchestrator | + multiattach = false 2026-04-02 00:02:23.472435 | orchestrator | + source_type = "volume" 2026-04-02 00:02:23.472439 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472443 | orchestrator | } 2026-04-02 00:02:23.472447 | orchestrator | 2026-04-02 00:02:23.472451 | orchestrator | + network { 2026-04-02 00:02:23.472454 | orchestrator | + access_network = false 2026-04-02 00:02:23.472458 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-02 00:02:23.472462 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-02 00:02:23.472466 | orchestrator | + mac = (known after apply) 2026-04-02 00:02:23.472470 | orchestrator | + name = (known after apply) 2026-04-02 00:02:23.472474 | orchestrator | + port = (known after apply) 2026-04-02 00:02:23.472477 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472481 | orchestrator | } 2026-04-02 00:02:23.472485 | orchestrator | } 2026-04-02 00:02:23.472489 | orchestrator | 2026-04-02 00:02:23.472493 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-02 00:02:23.472497 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-02 00:02:23.472501 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-02 00:02:23.472505 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-02 00:02:23.472509 | orchestrator | + all_metadata = (known after apply) 2026-04-02 00:02:23.472512 | orchestrator | + all_tags = (known after apply) 2026-04-02 00:02:23.472516 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.472520 | orchestrator | + config_drive = true 2026-04-02 00:02:23.472524 | orchestrator | + created = (known after apply) 2026-04-02 00:02:23.472528 | orchestrator | + flavor_id = (known after apply) 2026-04-02 00:02:23.472531 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-02 00:02:23.472535 | 
orchestrator | + force_delete = false 2026-04-02 00:02:23.472539 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-02 00:02:23.472543 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472547 | orchestrator | + image_id = (known after apply) 2026-04-02 00:02:23.472550 | orchestrator | + image_name = (known after apply) 2026-04-02 00:02:23.472554 | orchestrator | + key_pair = "testbed" 2026-04-02 00:02:23.472558 | orchestrator | + name = "testbed-node-4" 2026-04-02 00:02:23.472562 | orchestrator | + power_state = "active" 2026-04-02 00:02:23.472566 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472569 | orchestrator | + security_groups = (known after apply) 2026-04-02 00:02:23.472573 | orchestrator | + stop_before_destroy = false 2026-04-02 00:02:23.472577 | orchestrator | + updated = (known after apply) 2026-04-02 00:02:23.472581 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-02 00:02:23.472585 | orchestrator | 2026-04-02 00:02:23.472589 | orchestrator | + block_device { 2026-04-02 00:02:23.472592 | orchestrator | + boot_index = 0 2026-04-02 00:02:23.472596 | orchestrator | + delete_on_termination = false 2026-04-02 00:02:23.472600 | orchestrator | + destination_type = "volume" 2026-04-02 00:02:23.472604 | orchestrator | + multiattach = false 2026-04-02 00:02:23.472608 | orchestrator | + source_type = "volume" 2026-04-02 00:02:23.472612 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472615 | orchestrator | } 2026-04-02 00:02:23.472619 | orchestrator | 2026-04-02 00:02:23.472623 | orchestrator | + network { 2026-04-02 00:02:23.472627 | orchestrator | + access_network = false 2026-04-02 00:02:23.472631 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-02 00:02:23.472635 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-02 00:02:23.472638 | orchestrator | + mac = (known after apply) 2026-04-02 00:02:23.472642 | orchestrator | + name = (known 
after apply) 2026-04-02 00:02:23.472646 | orchestrator | + port = (known after apply) 2026-04-02 00:02:23.472653 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472658 | orchestrator | } 2026-04-02 00:02:23.472661 | orchestrator | } 2026-04-02 00:02:23.472668 | orchestrator | 2026-04-02 00:02:23.472672 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-02 00:02:23.472676 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-02 00:02:23.472680 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-02 00:02:23.472683 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-02 00:02:23.472687 | orchestrator | + all_metadata = (known after apply) 2026-04-02 00:02:23.472691 | orchestrator | + all_tags = (known after apply) 2026-04-02 00:02:23.472695 | orchestrator | + availability_zone = "nova" 2026-04-02 00:02:23.472699 | orchestrator | + config_drive = true 2026-04-02 00:02:23.472703 | orchestrator | + created = (known after apply) 2026-04-02 00:02:23.472707 | orchestrator | + flavor_id = (known after apply) 2026-04-02 00:02:23.472710 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-02 00:02:23.472714 | orchestrator | + force_delete = false 2026-04-02 00:02:23.472718 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-02 00:02:23.472722 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472726 | orchestrator | + image_id = (known after apply) 2026-04-02 00:02:23.472730 | orchestrator | + image_name = (known after apply) 2026-04-02 00:02:23.472733 | orchestrator | + key_pair = "testbed" 2026-04-02 00:02:23.472737 | orchestrator | + name = "testbed-node-5" 2026-04-02 00:02:23.472741 | orchestrator | + power_state = "active" 2026-04-02 00:02:23.472745 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472749 | orchestrator | + security_groups = (known after apply) 2026-04-02 00:02:23.472752 | orchestrator | + 
stop_before_destroy = false 2026-04-02 00:02:23.472756 | orchestrator | + updated = (known after apply) 2026-04-02 00:02:23.472760 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-02 00:02:23.472764 | orchestrator | 2026-04-02 00:02:23.472768 | orchestrator | + block_device { 2026-04-02 00:02:23.472772 | orchestrator | + boot_index = 0 2026-04-02 00:02:23.472775 | orchestrator | + delete_on_termination = false 2026-04-02 00:02:23.472779 | orchestrator | + destination_type = "volume" 2026-04-02 00:02:23.472783 | orchestrator | + multiattach = false 2026-04-02 00:02:23.472787 | orchestrator | + source_type = "volume" 2026-04-02 00:02:23.472791 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472795 | orchestrator | } 2026-04-02 00:02:23.472798 | orchestrator | 2026-04-02 00:02:23.472802 | orchestrator | + network { 2026-04-02 00:02:23.472806 | orchestrator | + access_network = false 2026-04-02 00:02:23.472810 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-02 00:02:23.472814 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-02 00:02:23.472817 | orchestrator | + mac = (known after apply) 2026-04-02 00:02:23.472821 | orchestrator | + name = (known after apply) 2026-04-02 00:02:23.472825 | orchestrator | + port = (known after apply) 2026-04-02 00:02:23.472829 | orchestrator | + uuid = (known after apply) 2026-04-02 00:02:23.472833 | orchestrator | } 2026-04-02 00:02:23.472837 | orchestrator | } 2026-04-02 00:02:23.472841 | orchestrator | 2026-04-02 00:02:23.472844 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-02 00:02:23.472848 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-02 00:02:23.472852 | orchestrator | + fingerprint = (known after apply) 2026-04-02 00:02:23.472856 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472860 | orchestrator | + name = "testbed" 2026-04-02 00:02:23.472863 | orchestrator | + private_key = 
(sensitive value) 2026-04-02 00:02:23.472867 | orchestrator | + public_key = (known after apply) 2026-04-02 00:02:23.472871 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472875 | orchestrator | + user_id = (known after apply) 2026-04-02 00:02:23.472879 | orchestrator | } 2026-04-02 00:02:23.472882 | orchestrator | 2026-04-02 00:02:23.472886 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-02 00:02:23.472890 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-02 00:02:23.472897 | orchestrator | + device = (known after apply) 2026-04-02 00:02:23.472901 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472904 | orchestrator | + instance_id = (known after apply) 2026-04-02 00:02:23.472908 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472915 | orchestrator | + volume_id = (known after apply) 2026-04-02 00:02:23.472918 | orchestrator | } 2026-04-02 00:02:23.472922 | orchestrator | 2026-04-02 00:02:23.472926 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-02 00:02:23.472930 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-02 00:02:23.472934 | orchestrator | + device = (known after apply) 2026-04-02 00:02:23.472938 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.472942 | orchestrator | + instance_id = (known after apply) 2026-04-02 00:02:23.472945 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.472949 | orchestrator | + volume_id = (known after apply) 2026-04-02 00:02:23.472953 | orchestrator | } 2026-04-02 00:02:23.472957 | orchestrator | 2026-04-02 00:02:23.472961 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-02 00:02:23.472964 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-02 00:02:23.475242 | orchestrator | + network_id = (known after apply) 2026-04-02 00:02:23.475246 | orchestrator | + no_gateway = false 2026-04-02 00:02:23.475250 | orchestrator | + region = (known after apply) 2026-04-02 00:02:23.475254 | orchestrator | + service_types = (known after apply) 2026-04-02 00:02:23.475260 | orchestrator | + tenant_id = (known after apply) 2026-04-02 00:02:23.475264 | orchestrator | 2026-04-02 00:02:23.475268 | orchestrator | + allocation_pool { 2026-04-02 00:02:23.475272 | orchestrator | + end = "192.168.31.250" 2026-04-02 00:02:23.475275 | orchestrator | + start = "192.168.31.200" 2026-04-02 00:02:23.475279 | orchestrator | } 2026-04-02 00:02:23.475283 | orchestrator | } 2026-04-02 00:02:23.475287 | orchestrator | 2026-04-02 00:02:23.475291 | orchestrator | # terraform_data.image will be created 2026-04-02 00:02:23.475294 | orchestrator | + resource "terraform_data" "image" { 2026-04-02 00:02:23.475298 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.475302 | orchestrator | + input = "Ubuntu 24.04" 2026-04-02 00:02:23.475306 | orchestrator | + output = (known after apply) 2026-04-02 00:02:23.475309 | orchestrator | } 2026-04-02 00:02:23.475313 | orchestrator | 2026-04-02 00:02:23.475317 | orchestrator | # terraform_data.image_node will be created 2026-04-02 00:02:23.475321 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-02 00:02:23.475335 | orchestrator | + id = (known after apply) 2026-04-02 00:02:23.475339 | orchestrator | + input = "Ubuntu 24.04" 2026-04-02 00:02:23.475343 | orchestrator | + output = (known after apply) 2026-04-02 00:02:23.475346 | orchestrator | } 2026-04-02 00:02:23.475350 | orchestrator | 2026-04-02 00:02:23.475354 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
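The security-group portion of the plan above corresponds to Terraform configuration roughly like the following. This is a minimal sketch reconstructed from the plan output, not the testbed repository's actual source: attribute values are taken from the plan, while the `security_group_id` references (which rules attach to which group) are an assumption, since the plan only shows them as "(known after apply)".

```hcl
# Node security group, as shown in the plan.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# Ingress TCP rule (security_group_node_rule1 in the plan).
# Attachment to the node group is assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

# VRRP is IP protocol number 112, which is why the plan shows
# protocol = "112" rather than a protocol name.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```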
2026-04-02 00:02:23.475358 | orchestrator |
2026-04-02 00:02:23.475362 | orchestrator | Changes to Outputs:
2026-04-02 00:02:23.475390 | orchestrator | + manager_address = (sensitive value)
2026-04-02 00:02:23.475394 | orchestrator | + private_key = (sensitive value)
2026-04-02 00:02:23.660535 | orchestrator | terraform_data.image: Creating...
2026-04-02 00:02:23.660619 | orchestrator | terraform_data.image_node: Creating...
2026-04-02 00:02:23.660633 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=d1bcff47-8f3b-928c-ad2a-339d2acf77a8]
2026-04-02 00:02:23.660640 | orchestrator | terraform_data.image: Creation complete after 0s [id=6e33ba67-4494-f1a1-3d26-68d386bc38b1]
2026-04-02 00:02:23.664933 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-02 00:02:23.670667 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-02 00:02:23.692314 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-02 00:02:23.715291 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-02 00:02:23.716187 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-02 00:02:23.717790 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-02 00:02:23.726239 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-02 00:02:23.736941 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-02 00:02:23.739556 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-02 00:02:23.741995 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-02 00:02:24.171302 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-02 00:02:24.179863 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-02 00:02:24.185092 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-02 00:02:24.190301 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-02 00:02:24.299574 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-02 00:02:24.304164 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-02 00:02:25.293313 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=cb7d4aaf-b2e1-43a4-b3b0-1f09b98a8d6a]
2026-04-02 00:02:25.300804 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-02 00:02:27.329646 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=313a743e-b82e-49a6-b933-92b7a7e896a9]
2026-04-02 00:02:27.334174 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-02 00:02:27.336925 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=3e41a6ee-2963-4f7f-bd44-4fc104801a1a]
2026-04-02 00:02:27.353716 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-02 00:02:27.380963 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=01ed39e8-1eff-44a7-98b4-951368397b21]
2026-04-02 00:02:27.390094 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-02 00:02:27.428973 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3]
2026-04-02 00:02:27.436480 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=e9e694a9-9d82-484d-8c29-c125fbbe1161]
2026-04-02 00:02:27.440193 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-02 00:02:27.441236 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-02 00:02:27.466096 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=a19da191-4981-42d2-9779-658e739bce45]
2026-04-02 00:02:27.476917 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-02 00:02:27.482372 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=5b80d981896eafe81ba001314909fb3c3271b41f]
2026-04-02 00:02:27.489204 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=9b931cae-7a06-4f63-bca1-6514ca0f11b4]
2026-04-02 00:02:27.489917 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-02 00:02:27.499953 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-02 00:02:27.504452 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=5e29c7a5-f411-44d3-9f54-46e8ba073aaf]
2026-04-02 00:02:27.504486 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 1s [id=1002bbe355c1137ebde0c7bac5477be13504c5f0]
2026-04-02 00:02:27.527282 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=2d38c850-3a2f-4695-a83c-0cf43f012ceb]
2026-04-02 00:02:27.529100 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-02 00:02:28.551854 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=f5154497-9cec-455c-af56-604dc2c7d74f]
2026-04-02 00:02:28.556000 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-02 00:02:28.701841 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=ba7db69a-ec5b-4915-a8ee-e6fab148bd19]
2026-04-02 00:02:30.763105 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=0032d447-591d-4b7c-93ad-b7b900e6d05d]
2026-04-02 00:02:30.825523 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=8e0fe724-9dc6-457e-9830-68d85bc3f312]
2026-04-02 00:02:30.882312 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=213be582-731a-41c2-8309-28c1726af439]
2026-04-02 00:02:30.900870 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=d50c10b7-0ee3-49f5-b5db-aaff5f388ee0]
2026-04-02 00:02:30.923904 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=f06db598-1059-4957-87c8-4c1fce10345d]
2026-04-02 00:02:30.936541 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=924a2fdb-5874-4458-8721-81e9cbcbc15b]
2026-04-02 00:02:31.399136 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=34ce19d8-6125-43bd-bfea-b8b094cad2db]
2026-04-02 00:02:31.407597 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-02 00:02:31.408289 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-02 00:02:31.411207 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-02 00:02:31.685714 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=35fa606c-5a2b-48c8-abf4-324c7da22bd0]
2026-04-02 00:02:31.698588 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-02 00:02:31.698708 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-02 00:02:31.701900 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-02 00:02:31.706071 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-02 00:02:31.706114 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-02 00:02:31.707088 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-02 00:02:31.715603 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-02 00:02:31.716102 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-02 00:02:31.775956 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=d2792d6e-c98b-4efb-bcd8-96f2bc632aa4]
2026-04-02 00:02:31.785864 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-02 00:02:31.942090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=52c6c1c4-6edf-49e0-bde8-dd11b481b7c3]
2026-04-02 00:02:31.952473 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-02 00:02:32.349143 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=c88e6b91-c417-434c-9052-e16f4532f19e]
2026-04-02 00:02:32.355674 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-02 00:02:32.467164 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=73144b9d-8a3d-478a-998e-4a7552fd086a]
2026-04-02 00:02:32.472793 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-02 00:02:32.766126 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=97a7aa63-bea3-4714-b061-6d07cb7688db]
2026-04-02 00:02:32.776141 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-02 00:02:32.778668 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=77abafef-e48c-48d0-b36a-74bd64ef0dec]
2026-04-02 00:02:32.781672 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=b45e1e3e-8f2c-4401-a2c4-f6731ca138cb]
2026-04-02 00:02:32.785521 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-02 00:02:32.786764 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-02 00:02:32.791836 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=15b43896-05c9-4b0b-89c9-6a014ef6192e]
2026-04-02 00:02:32.796629 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-02 00:02:32.819325 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=0d4bee37-e559-4f33-a779-3aa2b25d6a21]
2026-04-02 00:02:32.848991 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=a4fda57c-3780-4cbb-8d7a-950c0f4fbb5e]
2026-04-02 00:02:32.948998 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=619f5fa8-e0e9-465f-a8f9-15326e3a6700]
2026-04-02 00:02:33.120599 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=e22c06c3-2d93-4b5d-a6ee-69f622380eb8]
2026-04-02 00:02:33.158694 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=0ad7dafa-5987-41bb-8544-7899f6f92bda]
2026-04-02 00:02:33.511913 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=36ac0b02-d092-482a-b70d-7a693e2ee407]
2026-04-02 00:02:33.533817 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=11a1a2af-31de-4fd0-845d-3ad2e87cf17a]
2026-04-02 00:02:33.689775 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=7ac63c5d-3558-4772-9999-a60aa1181d2c]
2026-04-02 00:02:34.082620 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=e747d327-3862-4b88-9333-e47092a04e50]
2026-04-02 00:02:34.348966 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=11079858-ebfb-4366-aebf-55500e59280f]
2026-04-02 00:02:34.369024 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-02 00:02:34.380588 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-02 00:02:34.395129 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-02 00:02:34.396164 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-02 00:02:34.397675 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-02 00:02:34.402066 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-02 00:02:34.403527 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-02 00:02:36.641464 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=19a2f89a-5390-4831-b37c-c0448078389f]
2026-04-02 00:02:36.650041 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-02 00:02:36.658145 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-02 00:02:36.659606 | orchestrator | local_file.inventory: Creating...
2026-04-02 00:02:36.664199 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=858abfbc80b64eeca44602185f718c35038a54cd]
2026-04-02 00:02:36.664871 | orchestrator | local_file.inventory: Creation complete after 0s [id=7517754a354fddea3232d7875d988c131b77a747]
2026-04-02 00:02:38.442933 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=19a2f89a-5390-4831-b37c-c0448078389f]
2026-04-02 00:02:44.387983 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-02 00:02:44.400252 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-02 00:02:44.401399 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-02 00:02:44.402514 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-02 00:02:44.402567 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-02 00:02:44.403599 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-02 00:02:54.397225 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-02 00:02:54.401539 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-02 00:02:54.401613 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-02 00:02:54.402765 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-02 00:02:54.402865 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-02 00:02:54.403963 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-02 00:02:55.875986 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 22s [id=fb1499f8-723f-4295-93ce-9133a390b962]
2026-04-02 00:03:04.405315 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-02 00:03:04.405538 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-02 00:03:04.405572 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-02 00:03:04.405584 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-02 00:03:04.405596 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-02 00:03:05.043085 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=59b153ba-2f5c-4d33-9c2f-e161f431f8c5]
2026-04-02 00:03:05.602427 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 32s [id=59353830-acb4-466d-8bfc-2cc96234f51f]
2026-04-02 00:03:05.655722 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=d153b105-3298-4f18-b746-672f5b155bac]
2026-04-02 00:03:05.909637 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=8baa991f-e47e-4a1e-bac9-f7c273730153]
2026-04-02 00:03:06.043656 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=3f042c98-6e7a-48b9-89a0-b5ae71bc3dc4]
2026-04-02 00:03:06.059754 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-02 00:03:06.068845 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2657038544519525673]
2026-04-02 00:03:06.077165 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-02 00:03:06.082669 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-02 00:03:06.084741 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-02 00:03:06.087167 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-02 00:03:06.089885 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-02 00:03:06.092018 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-02 00:03:06.097059 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-02 00:03:06.103213 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-02 00:03:06.108846 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-02 00:03:06.119075 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-02 00:03:09.514750 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=59b153ba-2f5c-4d33-9c2f-e161f431f8c5/a19da191-4981-42d2-9779-658e739bce45]
2026-04-02 00:03:09.545908 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=3f042c98-6e7a-48b9-89a0-b5ae71bc3dc4/01ed39e8-1eff-44a7-98b4-951368397b21]
2026-04-02 00:03:09.548142 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=8baa991f-e47e-4a1e-bac9-f7c273730153/70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3]
2026-04-02 00:03:15.572665 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=3f042c98-6e7a-48b9-89a0-b5ae71bc3dc4/313a743e-b82e-49a6-b933-92b7a7e896a9]
2026-04-02 00:03:15.628321 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=59b153ba-2f5c-4d33-9c2f-e161f431f8c5/e9e694a9-9d82-484d-8c29-c125fbbe1161]
2026-04-02 00:03:15.663474 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=8baa991f-e47e-4a1e-bac9-f7c273730153/5e29c7a5-f411-44d3-9f54-46e8ba073aaf]
2026-04-02 00:03:15.667007 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=3f042c98-6e7a-48b9-89a0-b5ae71bc3dc4/3e41a6ee-2963-4f7f-bd44-4fc104801a1a]
2026-04-02 00:03:15.697688 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=8baa991f-e47e-4a1e-bac9-f7c273730153/9b931cae-7a06-4f63-bca1-6514ca0f11b4]
2026-04-02 00:03:15.712814 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=59b153ba-2f5c-4d33-9c2f-e161f431f8c5/2d38c850-3a2f-4695-a83c-0cf43f012ceb]
2026-04-02 00:03:16.119552 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-02 00:03:26.119708 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-02 00:03:26.440552 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=b5edbbdf-65c5-42bc-98e5-717165f6e350]
2026-04-02 00:03:26.459528 | orchestrator |
2026-04-02 00:03:26.459608 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-02 00:03:26.459615 | orchestrator |
2026-04-02 00:03:26.459620 | orchestrator | Outputs:
2026-04-02 00:03:26.459624 | orchestrator |
2026-04-02 00:03:26.459635 | orchestrator | manager_address = 
2026-04-02 00:03:26.459640 | orchestrator | private_key = 
2026-04-02 00:03:26.847292 | orchestrator | ok: Runtime: 0:01:10.458750
2026-04-02 00:03:26.879428 |
2026-04-02 00:03:26.879572 | TASK [Create infrastructure (stable)]
2026-04-02 00:03:27.413049 | orchestrator | skipping: Conditional result was False
2026-04-02 00:03:27.435452 |
2026-04-02 00:03:27.435626 | TASK [Fetch manager address]
2026-04-02 00:03:27.912490 | orchestrator | ok
2026-04-02 00:03:27.925644 |
2026-04-02 00:03:27.925838 | TASK [Set manager_host address]
2026-04-02 00:03:28.003991 | orchestrator | ok
2026-04-02 00:03:28.013206 |
2026-04-02 00:03:28.013338 | LOOP [Update ansible collections]
2026-04-02 00:03:29.030689 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-02 00:03:29.031046 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-02 00:03:29.031098 | orchestrator | Starting galaxy collection install process
2026-04-02 00:03:29.031130 | orchestrator | Process install dependency map
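The empty `manager_address =` and `private_key =` lines after "Apply complete!" are expected: both outputs were shown as "(sensitive value)" in the plan, so Terraform masks them in the console. A sketch of how such outputs are typically declared; the `value` expressions and resource references here are assumptions for illustration, not the testbed's actual source:

```hcl
output "manager_address" {
  # Hypothetical reference; the floating IP resource does appear in this log.
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  # Hypothetical key resource name.
  value     = tls_private_key.key.private_key_pem
  sensitive = true
}
```

Sensitive outputs can still be read deliberately when needed, e.g. with `terraform output -raw manager_address`, which is presumably how the later "Fetch manager address" task obtains the value.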
2026-04-02 00:03:29.031158 | orchestrator | Starting collection install process
2026-04-02 00:03:29.031183 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-04-02 00:03:29.031214 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-04-02 00:03:29.031249 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-02 00:03:29.031302 | orchestrator | ok: Item: commons Runtime: 0:00:00.672649
2026-04-02 00:03:30.195782 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-02 00:03:30.195932 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-02 00:03:30.195978 | orchestrator | Starting galaxy collection install process
2026-04-02 00:03:30.196012 | orchestrator | Process install dependency map
2026-04-02 00:03:30.196044 | orchestrator | Starting collection install process
2026-04-02 00:03:30.196160 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-04-02 00:03:30.196193 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-04-02 00:03:30.196287 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-02 00:03:30.196481 | orchestrator | ok: Item: services Runtime: 0:00:00.810186
2026-04-02 00:03:30.219107 |
2026-04-02 00:03:30.219252 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-02 00:03:40.853404 | orchestrator | ok
2026-04-02 00:03:40.864473 |
2026-04-02 00:03:40.864616 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-02 00:04:40.909535 | orchestrator | ok
2026-04-02 00:04:40.921305 |
2026-04-02 00:04:40.921446 | TASK [Fetch manager ssh hostkey]
2026-04-02 00:04:42.496984 | orchestrator | Output suppressed because no_log was given
2026-04-02 00:04:42.512993 |
2026-04-02 00:04:42.513198 | TASK [Get ssh keypair from terraform environment]
2026-04-02 00:04:43.057655 | orchestrator | ok: Runtime: 0:00:00.007662
2026-04-02 00:04:43.074191 |
2026-04-02 00:04:43.074652 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-02 00:04:43.127027 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-02 00:04:43.143125 |
2026-04-02 00:04:43.143532 | TASK [Run manager part 0]
2026-04-02 00:04:44.224197 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-02 00:04:44.281388 | orchestrator |
2026-04-02 00:04:44.281467 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-02 00:04:44.281475 | orchestrator |
2026-04-02 00:04:44.281492 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-02 00:04:46.056230 | orchestrator | ok: [testbed-manager]
2026-04-02 00:04:46.056299 | orchestrator |
2026-04-02 00:04:46.056332 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-02 00:04:46.056345 | orchestrator |
2026-04-02 00:04:46.056358 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-02 00:04:47.897815 | orchestrator | ok: [testbed-manager]
2026-04-02 00:04:47.897848 | orchestrator |
2026-04-02 00:04:47.897855 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-02 00:04:48.516069 | orchestrator | ok: [testbed-manager]
2026-04-02 00:04:48.516120 | orchestrator |
2026-04-02 00:04:48.516133 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-02 00:04:48.559830 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:04:48.559872 | orchestrator |
2026-04-02 00:04:48.559883 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-02 00:04:48.594594 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:04:48.594648 | orchestrator |
2026-04-02 00:04:48.594657 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-02 00:04:48.626836 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:04:48.626899 | orchestrator |
2026-04-02 00:04:48.626911 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-02 00:04:50.343622 | orchestrator | changed: [testbed-manager]
2026-04-02 00:04:50.343666 | orchestrator |
2026-04-02 00:04:50.343673 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-02 00:07:53.211616 | orchestrator | changed: [testbed-manager]
2026-04-02 00:07:53.211767 | orchestrator |
2026-04-02 00:07:53.211788 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-02 00:09:14.185557 | orchestrator | changed: [testbed-manager]
2026-04-02 00:09:14.185654 | orchestrator |
2026-04-02 00:09:14.185675 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-02 00:09:39.588245 | orchestrator | changed: [testbed-manager]
2026-04-02 00:09:39.588377 | orchestrator |
2026-04-02 00:09:39.588397 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-02 00:09:48.687380 | orchestrator | changed: [testbed-manager]
2026-04-02 00:09:48.687438 | orchestrator |
2026-04-02 00:09:48.687451 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-02 00:09:48.733705 | orchestrator | ok: [testbed-manager] 2026-04-02 00:09:48.733743 | orchestrator | 2026-04-02 00:09:48.733751 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-02 00:09:49.521797 | orchestrator | ok: [testbed-manager] 2026-04-02 00:09:49.521899 | orchestrator | 2026-04-02 00:09:49.521924 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-02 00:09:50.246924 | orchestrator | changed: [testbed-manager] 2026-04-02 00:09:50.248050 | orchestrator | 2026-04-02 00:09:50.248095 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-02 00:09:56.135063 | orchestrator | changed: [testbed-manager] 2026-04-02 00:09:56.135130 | orchestrator | 2026-04-02 00:09:56.135140 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-02 00:10:02.003191 | orchestrator | changed: [testbed-manager] 2026-04-02 00:10:02.003286 | orchestrator | 2026-04-02 00:10:02.003311 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-02 00:10:04.883690 | orchestrator | changed: [testbed-manager] 2026-04-02 00:10:04.883770 | orchestrator | 2026-04-02 00:10:04.883795 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-02 00:10:06.626302 | orchestrator | changed: [testbed-manager] 2026-04-02 00:10:06.626629 | orchestrator | 2026-04-02 00:10:06.626655 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-02 00:10:07.788504 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-02 00:10:07.788633 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-02 00:10:07.788649 | orchestrator | 2026-04-02 00:10:07.788667 | orchestrator | TASK [Sync 
sources in /opt/src] ************************************************ 2026-04-02 00:10:07.893908 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-02 00:10:07.893964 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-02 00:10:07.893970 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-02 00:10:07.893976 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-02 00:10:10.991270 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-02 00:10:10.991317 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-02 00:10:10.991342 | orchestrator | 2026-04-02 00:10:10.991354 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-02 00:10:11.534306 | orchestrator | changed: [testbed-manager] 2026-04-02 00:10:11.534423 | orchestrator | 2026-04-02 00:10:11.534452 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-02 00:11:32.058239 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-02 00:11:32.058351 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-02 00:11:32.058364 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-02 00:11:32.058374 | orchestrator | 2026-04-02 00:11:32.058384 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-02 00:11:34.361152 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-02 00:11:34.361968 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-02 00:11:34.362005 | orchestrator | 2026-04-02 00:11:34.362038 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-02 
00:11:34.362052 | orchestrator | 2026-04-02 00:11:34.362060 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:11:35.753777 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:35.753874 | orchestrator | 2026-04-02 00:11:35.753892 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-02 00:11:35.800561 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:35.800665 | orchestrator | 2026-04-02 00:11:35.800687 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-02 00:11:35.855625 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:35.855708 | orchestrator | 2026-04-02 00:11:35.855723 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-02 00:11:36.613763 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:36.613855 | orchestrator | 2026-04-02 00:11:36.613873 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-02 00:11:37.295467 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:37.295519 | orchestrator | 2026-04-02 00:11:37.295531 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-02 00:11:38.541465 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-02 00:11:38.541702 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-02 00:11:38.541721 | orchestrator | 2026-04-02 00:11:38.541730 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-02 00:11:39.883275 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:39.883533 | orchestrator | 2026-04-02 00:11:39.883554 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-02 00:11:41.538489 | orchestrator | changed: 
[testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-02 00:11:41.538533 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-02 00:11:41.538550 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-02 00:11:41.538557 | orchestrator | 2026-04-02 00:11:41.538567 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-02 00:11:41.596635 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:41.596724 | orchestrator | 2026-04-02 00:11:41.596739 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-02 00:11:41.668156 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:41.668257 | orchestrator | 2026-04-02 00:11:41.668281 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-02 00:11:42.213992 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:42.214068 | orchestrator | 2026-04-02 00:11:42.214081 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-02 00:11:42.292107 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:42.292206 | orchestrator | 2026-04-02 00:11:42.292231 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-02 00:11:43.108633 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-02 00:11:43.108675 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:43.108684 | orchestrator | 2026-04-02 00:11:43.108690 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-02 00:11:43.146716 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:43.146758 | orchestrator | 2026-04-02 00:11:43.146767 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-02 
00:11:43.181236 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:43.181405 | orchestrator | 2026-04-02 00:11:43.181414 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-02 00:11:43.224031 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:43.224121 | orchestrator | 2026-04-02 00:11:43.224138 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-02 00:11:43.308276 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:43.308369 | orchestrator | 2026-04-02 00:11:43.308388 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-02 00:11:43.965499 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:43.965612 | orchestrator | 2026-04-02 00:11:43.965640 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-02 00:11:43.965662 | orchestrator | 2026-04-02 00:11:43.965684 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:11:45.255081 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:45.255172 | orchestrator | 2026-04-02 00:11:45.255189 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-02 00:11:46.172111 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:46.172169 | orchestrator | 2026-04-02 00:11:46.172178 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:11:46.172185 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-02 00:11:46.172191 | orchestrator | 2026-04-02 00:11:46.447154 | orchestrator | ok: Runtime: 0:07:02.769714 2026-04-02 00:11:46.466161 | 2026-04-02 00:11:46.466305 | TASK [Point out that logging in to the manager is now possible] 2026-04-02 00:11:46.519277 | 
orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-02 00:11:46.534164 | 2026-04-02 00:11:46.534324 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-02 00:11:46.581777 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-02 00:11:46.592668 | 2026-04-02 00:11:46.592803 | TASK [Run manager part 1 + 2] 2026-04-02 00:11:47.513058 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-02 00:11:47.567949 | orchestrator | 2026-04-02 00:11:47.568034 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-02 00:11:47.568052 | orchestrator | 2026-04-02 00:11:47.568081 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:11:50.428142 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:50.428194 | orchestrator | 2026-04-02 00:11:50.428216 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-02 00:11:50.473100 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:50.473146 | orchestrator | 2026-04-02 00:11:50.473155 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-02 00:11:50.512333 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:50.512376 | orchestrator | 2026-04-02 00:11:50.512384 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-02 00:11:50.561161 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:50.561214 | orchestrator | 2026-04-02 00:11:50.561225 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-02 00:11:50.639043 | orchestrator | ok: 
[testbed-manager] 2026-04-02 00:11:50.639374 | orchestrator | 2026-04-02 00:11:50.639387 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-02 00:11:50.728267 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:50.728341 | orchestrator | 2026-04-02 00:11:50.728351 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-02 00:11:50.777357 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-02 00:11:50.777402 | orchestrator | 2026-04-02 00:11:50.777408 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-02 00:11:51.492279 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:51.492399 | orchestrator | 2026-04-02 00:11:51.492420 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-02 00:11:51.543276 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:11:51.543368 | orchestrator | 2026-04-02 00:11:51.543381 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-02 00:11:52.919538 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:52.919626 | orchestrator | 2026-04-02 00:11:52.919642 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-02 00:11:53.500757 | orchestrator | ok: [testbed-manager] 2026-04-02 00:11:53.500848 | orchestrator | 2026-04-02 00:11:53.500864 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-02 00:11:54.662510 | orchestrator | changed: [testbed-manager] 2026-04-02 00:11:54.662568 | orchestrator | 2026-04-02 00:11:54.662576 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-02 00:12:09.600624 | 
orchestrator | changed: [testbed-manager] 2026-04-02 00:12:09.600694 | orchestrator | 2026-04-02 00:12:09.600710 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-02 00:12:10.276807 | orchestrator | ok: [testbed-manager] 2026-04-02 00:12:10.276896 | orchestrator | 2026-04-02 00:12:10.276914 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-02 00:12:10.329667 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:12:10.329758 | orchestrator | 2026-04-02 00:12:10.329773 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-02 00:12:11.258148 | orchestrator | changed: [testbed-manager] 2026-04-02 00:12:11.258237 | orchestrator | 2026-04-02 00:12:11.258253 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-02 00:12:12.193966 | orchestrator | changed: [testbed-manager] 2026-04-02 00:12:12.194081 | orchestrator | 2026-04-02 00:12:12.194098 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-02 00:12:12.777522 | orchestrator | changed: [testbed-manager] 2026-04-02 00:12:12.777629 | orchestrator | 2026-04-02 00:12:12.777655 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-02 00:12:12.818788 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-02 00:12:12.818861 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-02 00:12:12.818869 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-02 00:12:12.818875 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-02 00:12:14.891654 | orchestrator | changed: [testbed-manager] 2026-04-02 00:12:14.891701 | orchestrator | 2026-04-02 00:12:14.891709 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-02 00:12:23.410747 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-02 00:12:23.410791 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-02 00:12:23.410800 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-02 00:12:23.410806 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-02 00:12:23.410815 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-02 00:12:23.410820 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-02 00:12:23.410826 | orchestrator | 2026-04-02 00:12:23.410831 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-02 00:12:24.437873 | orchestrator | changed: [testbed-manager] 2026-04-02 00:12:24.437965 | orchestrator | 2026-04-02 00:12:24.437983 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-02 00:12:27.456192 | orchestrator | changed: [testbed-manager] 2026-04-02 00:12:27.456252 | orchestrator | 2026-04-02 00:12:27.456427 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-02 00:12:27.497678 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:12:27.497752 | orchestrator | 2026-04-02 00:12:27.497766 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-02 00:13:58.351664 | orchestrator | changed: [testbed-manager] 2026-04-02 00:13:58.351734 | orchestrator | 2026-04-02 00:13:58.351749 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-02 00:13:59.443602 | orchestrator | ok: [testbed-manager] 2026-04-02 00:13:59.443688 | 
orchestrator | 2026-04-02 00:13:59.443706 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:13:59.443720 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-02 00:13:59.443732 | orchestrator | 2026-04-02 00:13:59.713808 | orchestrator | ok: Runtime: 0:02:12.633481 2026-04-02 00:13:59.730829 | 2026-04-02 00:13:59.731018 | TASK [Reboot manager] 2026-04-02 00:14:01.268485 | orchestrator | ok: Runtime: 0:00:00.986883 2026-04-02 00:14:01.287184 | 2026-04-02 00:14:01.287522 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-02 00:14:16.030900 | orchestrator | ok 2026-04-02 00:14:16.043782 | 2026-04-02 00:14:16.043939 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-02 00:15:16.098068 | orchestrator | ok 2026-04-02 00:15:16.109126 | 2026-04-02 00:15:16.109287 | TASK [Deploy manager + bootstrap nodes] 2026-04-02 00:15:18.494789 | orchestrator | 2026-04-02 00:15:18.494901 | orchestrator | # DEPLOY MANAGER 2026-04-02 00:15:18.494911 | orchestrator | 2026-04-02 00:15:18.494917 | orchestrator | + set -e 2026-04-02 00:15:18.494922 | orchestrator | + echo 2026-04-02 00:15:18.494928 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-02 00:15:18.494935 | orchestrator | + echo 2026-04-02 00:15:18.494990 | orchestrator | + cat /opt/manager-vars.sh 2026-04-02 00:15:18.498362 | orchestrator | export NUMBER_OF_NODES=6 2026-04-02 00:15:18.498456 | orchestrator | 2026-04-02 00:15:18.498471 | orchestrator | export CEPH_VERSION=reef 2026-04-02 00:15:18.498486 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-02 00:15:18.498498 | orchestrator | export MANAGER_VERSION=latest 2026-04-02 00:15:18.498528 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-02 00:15:18.498539 | orchestrator | 2026-04-02 00:15:18.498558 | orchestrator | export ARA=false 2026-04-02 00:15:18.498570 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-02 00:15:18.498587 | orchestrator | export TEMPEST=true 2026-04-02 00:15:18.498599 | orchestrator | export IS_ZUUL=true 2026-04-02 00:15:18.498610 | orchestrator | 2026-04-02 00:15:18.498628 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 00:15:18.498639 | orchestrator | export EXTERNAL_API=false 2026-04-02 00:15:18.498650 | orchestrator | 2026-04-02 00:15:18.498661 | orchestrator | export IMAGE_USER=ubuntu 2026-04-02 00:15:18.498676 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-02 00:15:18.498687 | orchestrator | 2026-04-02 00:15:18.498715 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-02 00:15:18.498749 | orchestrator | 2026-04-02 00:15:18.498761 | orchestrator | + echo 2026-04-02 00:15:18.498774 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-02 00:15:18.499767 | orchestrator | ++ export INTERACTIVE=false 2026-04-02 00:15:18.499798 | orchestrator | ++ INTERACTIVE=false 2026-04-02 00:15:18.499811 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-02 00:15:18.499823 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-02 00:15:18.499982 | orchestrator | + source /opt/manager-vars.sh 2026-04-02 00:15:18.500001 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-02 00:15:18.500012 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-02 00:15:18.500028 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-02 00:15:18.500039 | orchestrator | ++ CEPH_VERSION=reef 2026-04-02 00:15:18.500050 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-02 00:15:18.500065 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-02 00:15:18.500077 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-02 00:15:18.500089 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-02 00:15:18.500103 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-02 00:15:18.500130 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-02 00:15:18.500141 | orchestrator | ++ 
export ARA=false 2026-04-02 00:15:18.500153 | orchestrator | ++ ARA=false 2026-04-02 00:15:18.500164 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-02 00:15:18.500175 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-02 00:15:18.500187 | orchestrator | ++ export TEMPEST=true 2026-04-02 00:15:18.500198 | orchestrator | ++ TEMPEST=true 2026-04-02 00:15:18.500213 | orchestrator | ++ export IS_ZUUL=true 2026-04-02 00:15:18.500224 | orchestrator | ++ IS_ZUUL=true 2026-04-02 00:15:18.500236 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 00:15:18.500247 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 00:15:18.500258 | orchestrator | ++ export EXTERNAL_API=false 2026-04-02 00:15:18.500269 | orchestrator | ++ EXTERNAL_API=false 2026-04-02 00:15:18.500284 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-02 00:15:18.500295 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-02 00:15:18.500307 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-02 00:15:18.500317 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-02 00:15:18.500329 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-02 00:15:18.500340 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-02 00:15:18.500399 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-02 00:15:18.554578 | orchestrator | + docker version 2026-04-02 00:15:18.668033 | orchestrator | Client: Docker Engine - Community 2026-04-02 00:15:18.668132 | orchestrator | Version: 27.5.1 2026-04-02 00:15:18.668147 | orchestrator | API version: 1.47 2026-04-02 00:15:18.668161 | orchestrator | Go version: go1.22.11 2026-04-02 00:15:18.668172 | orchestrator | Git commit: 9f9e405 2026-04-02 00:15:18.668183 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-02 00:15:18.668195 | orchestrator | OS/Arch: linux/amd64 2026-04-02 00:15:18.668205 | orchestrator | Context: default 2026-04-02 00:15:18.668216 | orchestrator | 2026-04-02 
00:15:18.668228 | orchestrator | Server: Docker Engine - Community 2026-04-02 00:15:18.668239 | orchestrator | Engine: 2026-04-02 00:15:18.668250 | orchestrator | Version: 27.5.1 2026-04-02 00:15:18.668261 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-02 00:15:18.668304 | orchestrator | Go version: go1.22.11 2026-04-02 00:15:18.668315 | orchestrator | Git commit: 4c9b3b0 2026-04-02 00:15:18.668326 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-02 00:15:18.668337 | orchestrator | OS/Arch: linux/amd64 2026-04-02 00:15:18.668348 | orchestrator | Experimental: false 2026-04-02 00:15:18.668359 | orchestrator | containerd: 2026-04-02 00:15:18.668370 | orchestrator | Version: v2.2.2 2026-04-02 00:15:18.668381 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-02 00:15:18.668392 | orchestrator | runc: 2026-04-02 00:15:18.668403 | orchestrator | Version: 1.3.4 2026-04-02 00:15:18.668415 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-02 00:15:18.668425 | orchestrator | docker-init: 2026-04-02 00:15:18.668436 | orchestrator | Version: 0.19.0 2026-04-02 00:15:18.668448 | orchestrator | GitCommit: de40ad0 2026-04-02 00:15:18.670662 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-02 00:15:18.679993 | orchestrator | + set -e 2026-04-02 00:15:18.680043 | orchestrator | + source /opt/manager-vars.sh 2026-04-02 00:15:18.680055 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-02 00:15:18.680067 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-02 00:15:18.680077 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-02 00:15:18.680088 | orchestrator | ++ CEPH_VERSION=reef 2026-04-02 00:15:18.680100 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-02 00:15:18.680121 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-02 00:15:18.680148 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-02 00:15:18.680169 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-02 
00:15:18.680187 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-02 00:15:18.680205 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-02 00:15:18.680223 | orchestrator | ++ export ARA=false 2026-04-02 00:15:18.680241 | orchestrator | ++ ARA=false 2026-04-02 00:15:18.680260 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-02 00:15:18.680280 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-02 00:15:18.680297 | orchestrator | ++ export TEMPEST=true 2026-04-02 00:15:18.680309 | orchestrator | ++ TEMPEST=true 2026-04-02 00:15:18.680320 | orchestrator | ++ export IS_ZUUL=true 2026-04-02 00:15:18.680330 | orchestrator | ++ IS_ZUUL=true 2026-04-02 00:15:18.680341 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 00:15:18.680352 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 00:15:18.680362 | orchestrator | ++ export EXTERNAL_API=false 2026-04-02 00:15:18.680373 | orchestrator | ++ EXTERNAL_API=false 2026-04-02 00:15:18.680384 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-02 00:15:18.680394 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-02 00:15:18.680405 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-02 00:15:18.680415 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-02 00:15:18.680426 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-02 00:15:18.680437 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-02 00:15:18.680448 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-02 00:15:18.680458 | orchestrator | ++ export INTERACTIVE=false 2026-04-02 00:15:18.680469 | orchestrator | ++ INTERACTIVE=false 2026-04-02 00:15:18.680488 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-02 00:15:18.680504 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-02 00:15:18.680515 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-02 00:15:18.680526 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-02 00:15:18.680537 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-02 00:15:18.687363 | orchestrator | + set -e 2026-04-02 00:15:18.687414 | orchestrator | + VERSION=reef 2026-04-02 00:15:18.689068 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-02 00:15:18.694594 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-02 00:15:18.694646 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-02 00:15:18.700270 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-02 00:15:18.705902 | orchestrator | + set -e 2026-04-02 00:15:18.706357 | orchestrator | + VERSION=2024.2 2026-04-02 00:15:18.706993 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-02 00:15:18.710719 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-02 00:15:18.710775 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-02 00:15:18.715759 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-02 00:15:18.716682 | orchestrator | ++ semver latest 7.0.0 2026-04-02 00:15:18.780891 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-02 00:15:18.780994 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-02 00:15:18.781005 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-02 00:15:18.781789 | orchestrator | ++ semver latest 10.0.0-0 2026-04-02 00:15:18.846515 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-02 00:15:18.847669 | orchestrator | ++ semver 2024.2 2025.1 2026-04-02 00:15:18.911612 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-02 00:15:18.911712 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-02 00:15:19.004299 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-02 00:15:19.005442 | orchestrator | + source /opt/venv/bin/activate 
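[Editor's note] The `set-ceph-version.sh` and `set-openstack-version.sh` traces above both follow the same guard-then-rewrite pattern: grep for the key to confirm it is present, then `sed -i` the new value into `configuration.yml`. A minimal sketch of that pattern, with a temporary file standing in for `/opt/configuration/environments/manager/configuration.yml`:

```shell
# Sketch of the grep-then-sed version pinning traced above. A temp file
# stands in for /opt/configuration/environments/manager/configuration.yml;
# the starting value 'quincy' is illustrative.
set -e
VERSION=reef
CONF=$(mktemp)
echo 'ceph_version: quincy' > "$CONF"

# Guard: only rewrite the key if it already exists in the file
# (mirrors the `[[ -n ceph_version: ... ]]` check in the trace).
if grep -q '^ceph_version:' "$CONF"; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONF"
fi

grep '^ceph_version:' "$CONF"
```

The same pattern then repeats for `openstack_version: 2024.2` in the second script.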
2026-04-02 00:15:19.006792 | orchestrator | ++ deactivate nondestructive 2026-04-02 00:15:19.006855 | orchestrator | ++ '[' -n '' ']' 2026-04-02 00:15:19.006875 | orchestrator | ++ '[' -n '' ']' 2026-04-02 00:15:19.006896 | orchestrator | ++ hash -r 2026-04-02 00:15:19.006915 | orchestrator | ++ '[' -n '' ']' 2026-04-02 00:15:19.006935 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-02 00:15:19.006976 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-02 00:15:19.006999 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-02 00:15:19.007032 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-02 00:15:19.007053 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-02 00:15:19.007072 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-02 00:15:19.007090 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-02 00:15:19.007111 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-02 00:15:19.007131 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-02 00:15:19.007149 | orchestrator | ++ export PATH 2026-04-02 00:15:19.007167 | orchestrator | ++ '[' -n '' ']' 2026-04-02 00:15:19.007190 | orchestrator | ++ '[' -z '' ']' 2026-04-02 00:15:19.007210 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-02 00:15:19.007232 | orchestrator | ++ PS1='(venv) ' 2026-04-02 00:15:19.007252 | orchestrator | ++ export PS1 2026-04-02 00:15:19.007264 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-02 00:15:19.007276 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-02 00:15:19.007286 | orchestrator | ++ hash -r 2026-04-02 00:15:19.007320 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-02 00:15:20.019546 | orchestrator | 2026-04-02 00:15:20.019656 | 
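[Editor's note] The `bin/activate` trace above boils down to remembering the old `PATH`, prepending the venv's `bin` directory, and setting the prompt markers; a simplified, standalone rendition (using `/opt/venv` as in the log):

```shell
# Simplified rendition of what `source /opt/venv/bin/activate` does in the
# trace above. /opt/venv matches the path used in the log; no real venv is
# required for the PATH mechanics shown here.
VIRTUAL_ENV=/opt/venv
_OLD_VIRTUAL_PATH="$PATH"           # saved so deactivate can restore it
PATH="$VIRTUAL_ENV/bin:$PATH"       # venv binaries now shadow system ones
export VIRTUAL_ENV PATH
VIRTUAL_ENV_PROMPT='(venv) '
export VIRTUAL_ENV_PROMPT

# After activation, a bare `ansible-playbook` resolves to the venv's copy,
# which is why the playbook run in the log needs no absolute path.
case "$PATH" in
    "$VIRTUAL_ENV/bin:"*) echo "venv first on PATH" ;;
esac
```

Note also the `-i testbed-manager,` in the `ansible-playbook` call that follows: the trailing comma tells Ansible to treat the argument as a literal comma-separated host list rather than an inventory file path.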
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-02 00:15:20.019677 | orchestrator | 2026-04-02 00:15:20.019692 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-02 00:15:20.562376 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:20.562461 | orchestrator | 2026-04-02 00:15:20.562473 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-02 00:15:21.503098 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:21.593391 | orchestrator | 2026-04-02 00:15:21.593472 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-02 00:15:21.593487 | orchestrator | 2026-04-02 00:15:21.593499 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:15:23.949485 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:23.949623 | orchestrator | 2026-04-02 00:15:23.949642 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-02 00:15:23.995104 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:23.995198 | orchestrator | 2026-04-02 00:15:23.995218 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-02 00:15:24.439589 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:24.439686 | orchestrator | 2026-04-02 00:15:24.439703 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-02 00:15:24.481403 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:15:24.481487 | orchestrator | 2026-04-02 00:15:24.481501 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-02 00:15:24.841402 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:24.841533 | orchestrator | 2026-04-02 
00:15:24.841551 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-02 00:15:25.153533 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:25.153641 | orchestrator | 2026-04-02 00:15:25.153660 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-02 00:15:25.253099 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:15:25.253211 | orchestrator | 2026-04-02 00:15:25.253236 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-02 00:15:25.253256 | orchestrator | 2026-04-02 00:15:25.253273 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:15:27.082083 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:27.082188 | orchestrator | 2026-04-02 00:15:27.082205 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-02 00:15:27.170611 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-02 00:15:27.170731 | orchestrator | 2026-04-02 00:15:27.170748 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-02 00:15:27.238250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-02 00:15:27.238348 | orchestrator | 2026-04-02 00:15:27.238365 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-02 00:15:28.315171 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-02 00:15:28.315291 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-02 00:15:28.315317 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-02 00:15:28.315336 | orchestrator | 2026-04-02 00:15:28.315355 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-02 00:15:30.135598 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-02 00:15:30.135687 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-02 00:15:30.135699 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-02 00:15:30.135709 | orchestrator | 2026-04-02 00:15:30.135719 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-02 00:15:30.752641 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-02 00:15:30.752744 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:30.752760 | orchestrator | 2026-04-02 00:15:30.752773 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-02 00:15:31.401015 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-02 00:15:31.401136 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:31.401162 | orchestrator | 2026-04-02 00:15:31.401178 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-02 00:15:31.459904 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:15:31.460036 | orchestrator | 2026-04-02 00:15:31.460052 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-02 00:15:31.812677 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:31.812779 | orchestrator | 2026-04-02 00:15:31.812794 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-02 00:15:31.888661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-02 00:15:31.888751 | orchestrator | 2026-04-02 00:15:31.888765 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-04-02 00:15:32.992257 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:32.992358 | orchestrator | 2026-04-02 00:15:32.992374 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-02 00:15:33.864838 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:33.864934 | orchestrator | 2026-04-02 00:15:33.864951 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-02 00:15:44.281968 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:44.282074 | orchestrator | 2026-04-02 00:15:44.282099 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-02 00:15:44.338248 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:15:44.338338 | orchestrator | 2026-04-02 00:15:44.338355 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-02 00:15:44.338368 | orchestrator | 2026-04-02 00:15:44.338380 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:15:46.120715 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:46.120768 | orchestrator | 2026-04-02 00:15:46.120791 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-02 00:15:46.221838 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-02 00:15:46.221922 | orchestrator | 2026-04-02 00:15:46.221928 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-02 00:15:46.279604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-02 00:15:46.279644 | orchestrator | 2026-04-02 00:15:46.279650 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-04-02 00:15:48.808401 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:48.808509 | orchestrator | 2026-04-02 00:15:48.808526 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-02 00:15:48.870587 | orchestrator | ok: [testbed-manager] 2026-04-02 00:15:48.870677 | orchestrator | 2026-04-02 00:15:48.870691 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-02 00:15:48.990799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-02 00:15:48.990904 | orchestrator | 2026-04-02 00:15:48.990920 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-02 00:15:51.843095 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-02 00:15:51.843193 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-02 00:15:51.843208 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-02 00:15:51.843220 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-02 00:15:51.843231 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-02 00:15:51.843243 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-02 00:15:51.843254 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-02 00:15:51.843265 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-02 00:15:51.843277 | orchestrator | 2026-04-02 00:15:51.843289 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-02 00:15:52.463426 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:52.463511 | orchestrator | 2026-04-02 00:15:52.463523 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-04-02 00:15:53.076744 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:53.076841 | orchestrator | 2026-04-02 00:15:53.076893 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-02 00:15:53.161368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-02 00:15:53.161455 | orchestrator | 2026-04-02 00:15:53.161471 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-02 00:15:54.372421 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-02 00:15:54.372496 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-02 00:15:54.372512 | orchestrator | 2026-04-02 00:15:54.372524 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-02 00:15:55.006260 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:55.006305 | orchestrator | 2026-04-02 00:15:55.006312 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-02 00:15:55.067259 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:15:55.067344 | orchestrator | 2026-04-02 00:15:55.067362 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-02 00:15:55.148669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-02 00:15:55.148741 | orchestrator | 2026-04-02 00:15:55.148761 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-02 00:15:55.796785 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:55.796897 | orchestrator | 2026-04-02 00:15:55.796914 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-04-02 00:15:55.867382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-02 00:15:55.867475 | orchestrator | 2026-04-02 00:15:55.867489 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-02 00:15:57.258442 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-02 00:15:57.258530 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-02 00:15:57.258545 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:57.258558 | orchestrator | 2026-04-02 00:15:57.258570 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-02 00:15:57.973320 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:57.973373 | orchestrator | 2026-04-02 00:15:57.973383 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-02 00:15:58.039207 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:15:58.039291 | orchestrator | 2026-04-02 00:15:58.039307 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-02 00:15:58.136882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-02 00:15:58.136961 | orchestrator | 2026-04-02 00:15:58.136976 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-02 00:15:58.660436 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:58.660517 | orchestrator | 2026-04-02 00:15:58.660553 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-02 00:15:59.070314 | orchestrator | changed: [testbed-manager] 2026-04-02 00:15:59.070420 | orchestrator | 2026-04-02 00:15:59.070440 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-02 00:16:00.317403 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-02 00:16:00.317498 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-02 00:16:00.317510 | orchestrator | 2026-04-02 00:16:00.317520 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-02 00:16:00.972813 | orchestrator | changed: [testbed-manager] 2026-04-02 00:16:00.972915 | orchestrator | 2026-04-02 00:16:00.972930 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-02 00:16:01.336800 | orchestrator | ok: [testbed-manager] 2026-04-02 00:16:01.336954 | orchestrator | 2026-04-02 00:16:01.336972 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-02 00:16:01.690959 | orchestrator | changed: [testbed-manager] 2026-04-02 00:16:01.691056 | orchestrator | 2026-04-02 00:16:01.691072 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-02 00:16:01.737235 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:16:01.737321 | orchestrator | 2026-04-02 00:16:01.737336 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-02 00:16:01.820896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-02 00:16:01.820978 | orchestrator | 2026-04-02 00:16:01.820992 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-02 00:16:01.876920 | orchestrator | ok: [testbed-manager] 2026-04-02 00:16:01.877003 | orchestrator | 2026-04-02 00:16:01.877024 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-02 
00:16:03.917460 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-02 00:16:03.917579 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-02 00:16:03.917600 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-02 00:16:03.917617 | orchestrator | 2026-04-02 00:16:03.917633 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-02 00:16:04.643194 | orchestrator | changed: [testbed-manager] 2026-04-02 00:16:04.643329 | orchestrator | 2026-04-02 00:16:04.643356 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-02 00:16:05.352173 | orchestrator | changed: [testbed-manager] 2026-04-02 00:16:05.352282 | orchestrator | 2026-04-02 00:16:05.352299 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-02 00:16:06.090000 | orchestrator | changed: [testbed-manager] 2026-04-02 00:16:06.090147 | orchestrator | 2026-04-02 00:16:06.090167 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-02 00:16:06.170589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-02 00:16:06.170662 | orchestrator | 2026-04-02 00:16:06.170673 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-02 00:16:06.214274 | orchestrator | ok: [testbed-manager] 2026-04-02 00:16:06.214428 | orchestrator | 2026-04-02 00:16:06.214443 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-02 00:16:06.909565 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-02 00:16:06.909643 | orchestrator | 2026-04-02 00:16:06.909655 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-04-02 00:16:06.989744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-02 00:16:06.989884 | orchestrator | 2026-04-02 00:16:06.989902 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-02 00:16:07.716866 | orchestrator | changed: [testbed-manager] 2026-04-02 00:16:07.716953 | orchestrator | 2026-04-02 00:16:07.716976 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-02 00:16:08.419949 | orchestrator | ok: [testbed-manager] 2026-04-02 00:16:08.420029 | orchestrator | 2026-04-02 00:16:08.420041 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-02 00:16:08.477596 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:16:08.477689 | orchestrator | 2026-04-02 00:16:08.477704 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-02 00:16:08.539043 | orchestrator | ok: [testbed-manager] 2026-04-02 00:16:08.539157 | orchestrator | 2026-04-02 00:16:08.539182 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-02 00:16:09.401599 | orchestrator | changed: [testbed-manager] 2026-04-02 00:16:09.401697 | orchestrator | 2026-04-02 00:16:09.401711 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-02 00:17:18.281055 | orchestrator | changed: [testbed-manager] 2026-04-02 00:17:18.281197 | orchestrator | 2026-04-02 00:17:18.281258 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-02 00:17:19.238459 | orchestrator | ok: [testbed-manager] 2026-04-02 00:17:19.238609 | orchestrator | 2026-04-02 00:17:19.238636 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-04-02 00:17:19.300720 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:17:19.300807 | orchestrator | 2026-04-02 00:17:19.300820 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-02 00:17:25.454676 | orchestrator | changed: [testbed-manager] 2026-04-02 00:17:25.454790 | orchestrator | 2026-04-02 00:17:25.454807 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-02 00:17:25.529373 | orchestrator | ok: [testbed-manager] 2026-04-02 00:17:25.529473 | orchestrator | 2026-04-02 00:17:25.529511 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-02 00:17:25.529525 | orchestrator | 2026-04-02 00:17:25.529536 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-02 00:17:25.576957 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:17:25.577075 | orchestrator | 2026-04-02 00:17:25.577103 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-02 00:18:25.619190 | orchestrator | Pausing for 60 seconds 2026-04-02 00:18:25.619294 | orchestrator | changed: [testbed-manager] 2026-04-02 00:18:25.619307 | orchestrator | 2026-04-02 00:18:25.619318 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-02 00:18:28.552648 | orchestrator | changed: [testbed-manager] 2026-04-02 00:18:28.552767 | orchestrator | 2026-04-02 00:18:28.552783 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-02 00:19:10.006983 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-02 00:19:10.007093 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-02 00:19:10.007109 | orchestrator | changed: [testbed-manager] 2026-04-02 00:19:10.007148 | orchestrator | 2026-04-02 00:19:10.007161 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-02 00:19:15.582643 | orchestrator | changed: [testbed-manager] 2026-04-02 00:19:15.582755 | orchestrator | 2026-04-02 00:19:15.582772 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-02 00:19:15.667020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-02 00:19:15.667142 | orchestrator | 2026-04-02 00:19:15.667169 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-02 00:19:15.667191 | orchestrator | 2026-04-02 00:19:15.667212 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-02 00:19:15.722226 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:19:15.722324 | orchestrator | 2026-04-02 00:19:15.722341 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-02 00:19:15.792975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-02 00:19:15.793076 | orchestrator | 2026-04-02 00:19:15.793091 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-02 00:19:16.573665 | orchestrator | changed: [testbed-manager] 2026-04-02 00:19:16.573790 | orchestrator | 2026-04-02 00:19:16.573815 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-02 00:19:19.726593 | orchestrator | ok: [testbed-manager] 2026-04-02 00:19:19.726683 | orchestrator | 2026-04-02 00:19:19.726698 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-04-02 00:19:19.793115 | orchestrator | ok: [testbed-manager] => { 2026-04-02 00:19:19.793211 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-02 00:19:19.793227 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-02 00:19:19.793240 | orchestrator | "Checking running containers against expected versions...", 2026-04-02 00:19:19.793253 | orchestrator | "", 2026-04-02 00:19:19.793268 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-02 00:19:19.793280 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-02 00:19:19.793291 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793302 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-02 00:19:19.793313 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793325 | orchestrator | "", 2026-04-02 00:19:19.793336 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-02 00:19:19.793347 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-02 00:19:19.793358 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793369 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-02 00:19:19.793380 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793391 | orchestrator | "", 2026-04-02 00:19:19.793402 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-02 00:19:19.793414 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-02 00:19:19.793483 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793496 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-02 00:19:19.793507 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793518 | orchestrator | "", 2026-04-02 00:19:19.793529 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-02 00:19:19.793541 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-02 00:19:19.793552 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793563 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-02 00:19:19.793575 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793586 | orchestrator | "", 2026-04-02 00:19:19.793596 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-02 00:19:19.793607 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-02 00:19:19.793643 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793655 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-02 00:19:19.793666 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793677 | orchestrator | "", 2026-04-02 00:19:19.793688 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-02 00:19:19.793699 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.793710 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793721 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.793731 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793742 | orchestrator | "", 2026-04-02 00:19:19.793753 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-02 00:19:19.793764 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-02 00:19:19.793775 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793786 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-02 00:19:19.793797 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793808 | orchestrator | "", 2026-04-02 00:19:19.793819 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-02 00:19:19.793830 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-02 00:19:19.793841 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793852 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-02 00:19:19.793863 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793874 | orchestrator | "", 2026-04-02 00:19:19.793894 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-02 00:19:19.793905 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-02 00:19:19.793921 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793932 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-02 00:19:19.793944 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.793955 | orchestrator | "", 2026-04-02 00:19:19.793965 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-02 00:19:19.793976 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-02 00:19:19.793987 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.793998 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-02 00:19:19.794009 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.794076 | orchestrator | "", 2026-04-02 00:19:19.794088 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-02 00:19:19.794099 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794110 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.794121 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794132 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.794143 | orchestrator | "", 2026-04-02 00:19:19.794153 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-02 00:19:19.794164 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794175 | 
orchestrator | " Enabled: true", 2026-04-02 00:19:19.794186 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794197 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.794208 | orchestrator | "", 2026-04-02 00:19:19.794219 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-02 00:19:19.794230 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794241 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.794252 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794263 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.794273 | orchestrator | "", 2026-04-02 00:19:19.794284 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-02 00:19:19.794295 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794306 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.794317 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794336 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.794347 | orchestrator | "", 2026-04-02 00:19:19.794358 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-02 00:19:19.794387 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794399 | orchestrator | " Enabled: true", 2026-04-02 00:19:19.794410 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-02 00:19:19.794421 | orchestrator | " Status: ✅ MATCH", 2026-04-02 00:19:19.794462 | orchestrator | "", 2026-04-02 00:19:19.794478 | orchestrator | "=== Summary ===", 2026-04-02 00:19:19.794495 | orchestrator | "Errors (version mismatches): 0", 2026-04-02 00:19:19.794514 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-02 00:19:19.794533 | orchestrator | "", 2026-04-02 00:19:19.794550 | orchestrator | "✅ All running containers match expected 
versions!" 2026-04-02 00:19:19.794562 | orchestrator | ] 2026-04-02 00:19:19.794573 | orchestrator | } 2026-04-02 00:19:19.794584 | orchestrator | 2026-04-02 00:19:19.794596 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-02 00:19:19.838692 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:19:19.838788 | orchestrator | 2026-04-02 00:19:19.838802 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:19:19.838820 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-02 00:19:19.838838 | orchestrator | 2026-04-02 00:19:19.935621 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-02 00:19:19.935685 | orchestrator | + deactivate 2026-04-02 00:19:19.935694 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-02 00:19:19.935707 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-02 00:19:19.935715 | orchestrator | + export PATH 2026-04-02 00:19:19.935723 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-02 00:19:19.935731 | orchestrator | + '[' -n '' ']' 2026-04-02 00:19:19.935738 | orchestrator | + hash -r 2026-04-02 00:19:19.935745 | orchestrator | + '[' -n '' ']' 2026-04-02 00:19:19.935752 | orchestrator | + unset VIRTUAL_ENV 2026-04-02 00:19:19.935760 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-02 00:19:19.935767 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-02 00:19:19.935774 | orchestrator | + unset -f deactivate 2026-04-02 00:19:19.935782 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-02 00:19:19.943962 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-02 00:19:19.943979 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-02 00:19:19.943987 | orchestrator | + local max_attempts=60 2026-04-02 00:19:19.943994 | orchestrator | + local name=ceph-ansible 2026-04-02 00:19:19.944001 | orchestrator | + local attempt_num=1 2026-04-02 00:19:19.944936 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-02 00:19:19.989341 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-02 00:19:19.989496 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-02 00:19:19.989522 | orchestrator | + local max_attempts=60 2026-04-02 00:19:19.989541 | orchestrator | + local name=kolla-ansible 2026-04-02 00:19:19.989560 | orchestrator | + local attempt_num=1 2026-04-02 00:19:19.990375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-02 00:19:20.029865 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-02 00:19:20.029945 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-02 00:19:20.029958 | orchestrator | + local max_attempts=60 2026-04-02 00:19:20.029970 | orchestrator | + local name=osism-ansible 2026-04-02 00:19:20.029980 | orchestrator | + local attempt_num=1 2026-04-02 00:19:20.030525 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-02 00:19:20.073237 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-02 00:19:20.073315 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-02 00:19:20.073324 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-02 00:19:20.777967 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-02 00:19:20.934927 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-02 00:19:20.935059 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.935077 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.935090 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-02 00:19:20.935102 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-02 00:19:20.935114 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.935124 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.935135 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-04-02 00:19:20.935164 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.935175 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-02 00:19:20.935186 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-04-02 00:19:20.935197 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-02 00:19:20.935208 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.935219 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-02 00:19:20.935230 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.935240 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-02 00:19:20.941652 | orchestrator | ++ semver latest 7.0.0 2026-04-02 00:19:20.983895 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-02 00:19:20.983971 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-02 00:19:20.983981 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-02 00:19:20.989644 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-02 00:19:33.523705 | orchestrator | 2026-04-02 00:19:33 | INFO  | Prepare task for execution of resolvconf. 2026-04-02 00:19:33.737931 | orchestrator | 2026-04-02 00:19:33 | INFO  | Task 9b600d58-7a7d-4311-a2c3-9447cb7792f4 (resolvconf) was prepared for execution. 2026-04-02 00:19:33.738090 | orchestrator | 2026-04-02 00:19:33 | INFO  | It takes a moment until task 9b600d58-7a7d-4311-a2c3-9447cb7792f4 (resolvconf) has been started and output is visible here. 
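The `wait_for_container_healthy` calls traced above (each running `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`) can be reconstructed roughly as the sketch below. This is a hypothetical reconstruction from the `set -x` trace, not the script's actual source: the probe is factored into a `probe_health` function (an illustrative name) so it can be stubbed, and the 5-second retry interval is an assumption — only the instant-success path is visible in the log.

```shell
# Hypothetical reconstruction of the health-wait helper seen in the trace.
# probe_health is an illustrative wrapper; the real script inlines docker inspect.
probe_health() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll until the container's health status reads "healthy".
    until [ "$(probe_health "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed retry interval; not visible in the trace
    done
}
```

In the job all three containers (ceph-ansible, kolla-ansible, osism-ansible) are already `healthy` on the first probe, so the loop body never runs.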
2026-04-02 00:19:46.700675 | orchestrator | 2026-04-02 00:19:46.700785 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-02 00:19:46.700802 | orchestrator | 2026-04-02 00:19:46.700814 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:19:46.700826 | orchestrator | Thursday 02 April 2026 00:19:36 +0000 (0:00:00.172) 0:00:00.172 ******** 2026-04-02 00:19:46.700837 | orchestrator | ok: [testbed-manager] 2026-04-02 00:19:46.700849 | orchestrator | 2026-04-02 00:19:46.700860 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-02 00:19:46.700873 | orchestrator | Thursday 02 April 2026 00:19:40 +0000 (0:00:03.728) 0:00:03.901 ******** 2026-04-02 00:19:46.700884 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:19:46.700895 | orchestrator | 2026-04-02 00:19:46.700906 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-02 00:19:46.700917 | orchestrator | Thursday 02 April 2026 00:19:40 +0000 (0:00:00.064) 0:00:03.965 ******** 2026-04-02 00:19:46.700928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-02 00:19:46.700940 | orchestrator | 2026-04-02 00:19:46.700951 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-02 00:19:46.700962 | orchestrator | Thursday 02 April 2026 00:19:40 +0000 (0:00:00.076) 0:00:04.042 ******** 2026-04-02 00:19:46.700984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-02 00:19:46.700995 | orchestrator | 2026-04-02 00:19:46.701007 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-02 00:19:46.701018 | orchestrator | Thursday 02 April 2026 00:19:40 +0000 (0:00:00.065) 0:00:04.107 ******** 2026-04-02 00:19:46.701028 | orchestrator | ok: [testbed-manager] 2026-04-02 00:19:46.701040 | orchestrator | 2026-04-02 00:19:46.701051 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-02 00:19:46.701062 | orchestrator | Thursday 02 April 2026 00:19:41 +0000 (0:00:01.126) 0:00:05.234 ******** 2026-04-02 00:19:46.701073 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:19:46.701084 | orchestrator | 2026-04-02 00:19:46.701095 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-02 00:19:46.701106 | orchestrator | Thursday 02 April 2026 00:19:41 +0000 (0:00:00.056) 0:00:05.290 ******** 2026-04-02 00:19:46.701117 | orchestrator | ok: [testbed-manager] 2026-04-02 00:19:46.701128 | orchestrator | 2026-04-02 00:19:46.701139 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-02 00:19:46.701150 | orchestrator | Thursday 02 April 2026 00:19:42 +0000 (0:00:00.576) 0:00:05.866 ******** 2026-04-02 00:19:46.701160 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:19:46.701171 | orchestrator | 2026-04-02 00:19:46.701182 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-02 00:19:46.701194 | orchestrator | Thursday 02 April 2026 00:19:42 +0000 (0:00:00.078) 0:00:05.944 ******** 2026-04-02 00:19:46.701205 | orchestrator | changed: [testbed-manager] 2026-04-02 00:19:46.701218 | orchestrator | 2026-04-02 00:19:46.701231 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-02 00:19:46.701243 | orchestrator | Thursday 02 April 2026 00:19:43 +0000 (0:00:00.581) 0:00:06.526 ******** 2026-04-02 00:19:46.701255 | orchestrator | changed: 
[testbed-manager] 2026-04-02 00:19:46.701267 | orchestrator | 2026-04-02 00:19:46.701280 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-02 00:19:46.701292 | orchestrator | Thursday 02 April 2026 00:19:44 +0000 (0:00:01.111) 0:00:07.638 ******** 2026-04-02 00:19:46.701304 | orchestrator | ok: [testbed-manager] 2026-04-02 00:19:46.701317 | orchestrator | 2026-04-02 00:19:46.701350 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-02 00:19:46.701363 | orchestrator | Thursday 02 April 2026 00:19:45 +0000 (0:00:00.997) 0:00:08.635 ******** 2026-04-02 00:19:46.701375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-02 00:19:46.701413 | orchestrator | 2026-04-02 00:19:46.701426 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-02 00:19:46.701439 | orchestrator | Thursday 02 April 2026 00:19:45 +0000 (0:00:00.082) 0:00:08.717 ******** 2026-04-02 00:19:46.701451 | orchestrator | changed: [testbed-manager] 2026-04-02 00:19:46.701464 | orchestrator | 2026-04-02 00:19:46.701477 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:19:46.701492 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-02 00:19:46.701512 | orchestrator | 2026-04-02 00:19:46.701530 | orchestrator | 2026-04-02 00:19:46.701551 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:19:46.701570 | orchestrator | Thursday 02 April 2026 00:19:46 +0000 (0:00:01.165) 0:00:09.883 ******** 2026-04-02 00:19:46.701588 | orchestrator | =============================================================================== 2026-04-02 00:19:46.701604 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.73s 2026-04-02 00:19:46.701615 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2026-04-02 00:19:46.701625 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2026-04-02 00:19:46.701636 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2026-04-02 00:19:46.701646 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-04-02 00:19:46.701657 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2026-04-02 00:19:46.701684 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.58s 2026-04-02 00:19:46.701696 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-02 00:19:46.701706 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-02 00:19:46.701717 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-02 00:19:46.701728 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-04-02 00:19:46.701738 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-02 00:19:46.701749 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-02 00:19:46.886002 | orchestrator | + osism apply sshconfig 2026-04-02 00:19:58.150283 | orchestrator | 2026-04-02 00:19:58 | INFO  | Prepare task for execution of sshconfig. 2026-04-02 00:19:58.236633 | orchestrator | 2026-04-02 00:19:58 | INFO  | Task 4db02b01-3dab-466b-bd9e-b95f186bfa7f (sshconfig) was prepared for execution. 
2026-04-02 00:19:58.236729 | orchestrator | 2026-04-02 00:19:58 | INFO  | It takes a moment until task 4db02b01-3dab-466b-bd9e-b95f186bfa7f (sshconfig) has been started and output is visible here. 2026-04-02 00:20:08.566101 | orchestrator | 2026-04-02 00:20:08.566212 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-02 00:20:08.566229 | orchestrator | 2026-04-02 00:20:08.566242 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-02 00:20:08.566254 | orchestrator | Thursday 02 April 2026 00:20:01 +0000 (0:00:00.175) 0:00:00.175 ******** 2026-04-02 00:20:08.566265 | orchestrator | ok: [testbed-manager] 2026-04-02 00:20:08.566277 | orchestrator | 2026-04-02 00:20:08.566289 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-02 00:20:08.566300 | orchestrator | Thursday 02 April 2026 00:20:01 +0000 (0:00:00.855) 0:00:01.030 ******** 2026-04-02 00:20:08.566339 | orchestrator | changed: [testbed-manager] 2026-04-02 00:20:08.566418 | orchestrator | 2026-04-02 00:20:08.566430 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-02 00:20:08.566441 | orchestrator | Thursday 02 April 2026 00:20:02 +0000 (0:00:00.474) 0:00:01.505 ******** 2026-04-02 00:20:08.566452 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-02 00:20:08.566463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-02 00:20:08.566474 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-02 00:20:08.566485 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-02 00:20:08.566496 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-02 00:20:08.566507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-02 00:20:08.566517 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-02 00:20:08.566528 | orchestrator | 2026-04-02 00:20:08.566539 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-02 00:20:08.566550 | orchestrator | Thursday 02 April 2026 00:20:07 +0000 (0:00:05.241) 0:00:06.747 ******** 2026-04-02 00:20:08.566561 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:20:08.566571 | orchestrator | 2026-04-02 00:20:08.566582 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-02 00:20:08.566596 | orchestrator | Thursday 02 April 2026 00:20:07 +0000 (0:00:00.097) 0:00:06.844 ******** 2026-04-02 00:20:08.566608 | orchestrator | changed: [testbed-manager] 2026-04-02 00:20:08.566621 | orchestrator | 2026-04-02 00:20:08.566634 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:20:08.566648 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-02 00:20:08.566662 | orchestrator | 2026-04-02 00:20:08.566676 | orchestrator | 2026-04-02 00:20:08.566689 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:20:08.566704 | orchestrator | Thursday 02 April 2026 00:20:08 +0000 (0:00:00.560) 0:00:07.405 ******** 2026-04-02 00:20:08.566717 | orchestrator | =============================================================================== 2026-04-02 00:20:08.566730 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.24s 2026-04-02 00:20:08.566742 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.86s 2026-04-02 00:20:08.566755 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-04-02 00:20:08.566768 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.47s 2026-04-02 00:20:08.566781 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-04-02 00:20:08.734957 | orchestrator | + osism apply known-hosts 2026-04-02 00:20:20.049893 | orchestrator | 2026-04-02 00:20:20 | INFO  | Prepare task for execution of known-hosts. 2026-04-02 00:20:20.116702 | orchestrator | 2026-04-02 00:20:20 | INFO  | Task 6e949fd4-8c9e-47dc-8660-05315b371003 (known-hosts) was prepared for execution. 2026-04-02 00:20:20.116795 | orchestrator | 2026-04-02 00:20:20 | INFO  | It takes a moment until task 6e949fd4-8c9e-47dc-8660-05315b371003 (known-hosts) has been started and output is visible here. 2026-04-02 00:20:35.260089 | orchestrator | 2026-04-02 00:20:35.260193 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-02 00:20:35.260208 | orchestrator | 2026-04-02 00:20:35.260219 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-02 00:20:35.260230 | orchestrator | Thursday 02 April 2026 00:20:23 +0000 (0:00:00.210) 0:00:00.210 ******** 2026-04-02 00:20:35.260241 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-02 00:20:35.260251 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-02 00:20:35.260261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-02 00:20:35.260291 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-02 00:20:35.260301 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-02 00:20:35.260397 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-02 00:20:35.260407 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-02 00:20:35.260417 | orchestrator | 2026-04-02 00:20:35.260427 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-02 
00:20:35.260437 | orchestrator | Thursday 02 April 2026 00:20:29 +0000 (0:00:06.213) 0:00:06.423 ******** 2026-04-02 00:20:35.260459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-02 00:20:35.260472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-02 00:20:35.260483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-02 00:20:35.260492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-02 00:20:35.260502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-02 00:20:35.260512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-02 00:20:35.260522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-02 00:20:35.260532 | orchestrator | 2026-04-02 00:20:35.260542 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:35.260552 | orchestrator | Thursday 02 April 2026 00:20:29 +0000 (0:00:00.170) 0:00:06.593 ******** 2026-04-02 00:20:35.260565 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKa/SpZpwRwtLySI7ySGqJroKN3SCooGz6dGf1HT+oQLNisPYw/+0m1C5vvDuhed+Ne+6wWXTNyOwkyfft1O6d6K4QeakvWAuzzfx4reo4zTsdKxbgkPkJ+ART36bHeErU/eQ4v8GcU19OL2G2LqmVbxRoN5EsjLNb+wOVlsPNd7BvdyVjPnVk/26ZaCRwj5V8cHlmRbWPPEimXNgtPEFDtBWjB4tuoY5D4t2tkssoIRQZWZycKi4lZ3vB+JXu1MS1wxdfZWC6RHEgIgK6bM+Ch2dk/Y13YBIwdA9c+YFEIr0uzijfB1eGiDA1i/kjAGL+B4jga8Eo5fFJro4SnHvtIy9MsdjiSbSnZGQLfx7o15+tZvXGDtDtLOHS2mUHfUY4ordrX8SCSaEgdf5yi9kVRFhHmu5zFZa4t3QAyIKBY5HuKV9oav0qDsC1Zq1bK0b2qBUNGInTTy/A9NuT0+U1dMIbnwjcZHSw0fbCS6sFbzafq02pZIyyLdckYeMJ8rk=) 2026-04-02 00:20:35.260579 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLGNEDSkQViDnqt9VdVH6dVpc1dDtfiyK+KkqpCwWt/bGlpxR8i8ZQIlOZq+7GrxbUzpA8GF8s7PVstDXDVOCIg=) 2026-04-02 00:20:35.260591 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICUUrnsMO6DTym5TKDLBZcI4pxNk+kmA+JLAYGDupPe/) 2026-04-02 00:20:35.260603 | orchestrator | 2026-04-02 00:20:35.260615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:35.260626 | orchestrator | Thursday 02 April 2026 00:20:30 +0000 (0:00:01.256) 0:00:07.850 ******** 2026-04-02 00:20:35.260637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBJIdpZxCT5HpfdWVSJZGuGyN0v41Z8wh3DIHR1dGPr3) 2026-04-02 00:20:35.260679 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtVjw4x+mkoIGg3DOdvcBF8e2GIbA9FViGP5zx9fBp+R949K6kk0eVPMIGowZGpAMeQ1d7K+g8lfAwhMWIt6L82qcQKb3LCxR8x1zi34MuCJ5kg1YcBQkvdtw1tas/V3xMrDqpZWFqABHW4AlQN4CKhEJKddNTM4Yi69LkaeBO7n/6n22DXhGj5RoigCT2OWkZVDCOGVOvrQ/6yjIsJDCiX50hRx+LdjY/nsQajtBWrFGPWY4vPfaEXxDAhFMXO/p5EGxZVj0CHad0+Q+ER7roXCzrJREWLztRINzRdrYyfI/O8TsRiRKxb5olq/xcvKhVQJSBaM4y7/c+qIB2M9dp7IARIQXybPYW2pTQ7IPbQQQQcZURToW0lT5Ecku0o4YYA+YL3hQ6KeP19V590SLIpwmXqSzKys1VdoZvcpuqq57NlFH1VVMWe+Rlu21wWl+azaY7bDy5ls8ZejsRE7uLTfR+Dl5WDBXB1jtODmBApt2/p6zL/SttLF+VLMgmQQU=) 2026-04-02 00:20:35.260702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNSSHWBCrEqqGGjXwK4iI9dNHhXn9UZFcOetqy/WiMQME87qLoTBE2fR5HbJYc8aNE2MhzBlKiM2GM+8Obolk8I=) 2026-04-02 00:20:35.260714 | orchestrator | 2026-04-02 00:20:35.260726 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:35.260737 | orchestrator | Thursday 02 April 2026 00:20:31 +0000 (0:00:01.084) 0:00:08.935 ******** 2026-04-02 00:20:35.260749 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDN6Fvnu2fF3kSdDZ0/xl6zcB0/hTrvB3bVmd4ZxjYHelh75u+crvljR3QHnmW5D6+BsOf8OO7/U6NKQFOxTnZMP3S7PQ7NPwqUaVdHWeU0+AQEjVdNoVNe5KsnExTNb2gWX6YZWHg5ZNugKnioTQa92QxBimHZ8G5AGU2l+QS37W5DeOy91xRFo78c/AXXka3mMBK8yuPIdUbT9Vo0gb7/d8bGUfwPDN4RIdT9JEYClD6trOqxVWpCZnybjWweBjQljucxrS/u0BK6i+BTjNXowoP6t88g6YXZ8xgRAXu+YGCLKWmSGRZwCA68OtU2BxVGs0KVSD05b/yaOxSWSrygzdhgwDDvlUO2vYySjQ6WtITX/ul0aZC9GuODFAfEZh6IoO72dxKZbEC0Yqdc2EcFjLkQ0O5CojqMtYX7QOGUUwSktUFODLMCv/EBtuIuj+YoBO3eP5ROBK3XhhsrtPS49qmvQJrbQ1V57c7HM1HRETVhkikLkFykySFpVlJHy+U=) 2026-04-02 00:20:35.260761 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFkNYYfcLqHoY/TcTMhWDf4hdrBKnCFCD+mVGq7u/uYVPGSxTnJsJLG3vAfpQNlPQScpElbGJfjTs0hlXc5prs4=) 
2026-04-02 00:20:35.260842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM7DztO1hISOTsJhL/IWpc+uHcpZ6d/A1nEC2Cl0XpZK) 2026-04-02 00:20:35.260855 | orchestrator | 2026-04-02 00:20:35.260866 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:35.260878 | orchestrator | Thursday 02 April 2026 00:20:32 +0000 (0:00:00.997) 0:00:09.932 ******** 2026-04-02 00:20:35.260893 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA4fOAqbhj8cE4Cd6D7zeOKyPlYtZjG2vJDcyEKQDnel) 2026-04-02 00:20:35.260904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvpdv3/Rua9w0tPTJ/GvkajXLTI8jKyIbuo2OI//u2c5oV3In5IgKWgeoSlp+f6MrPNakhpj2NQb8v9Ph8PdPOiWN6ib8ZLedNXcfMpO0bKd1ea6uz+1mAwqT6fs+0l1sML4zbNcGsE0+rsjvq5OQQTAeJsBpEfzY8Axh5ODgKYoYHPZ7h5jJ2gaaa5AejtrmOEms3PqR5fKj0XqI64SesuH5S/4yesj0Yn3eYPZyUK2faOPYoetrWQYlH6q+XMA656yMZGLJwp3DwJaHUPU/QhBr8A4s8g7RSWm4MdEyWn0Cr5A2CpnBeq3rjJEta8dOcckzZwlPIqqicv9qq/du80InI0GeO745jJFj1CqWZC6cC+oLPcanqSW9MhwUSx6mi2oLALV8v7+1aJckNOSJg3d2+jgrp9Wro8h4a3zSJWXiSz7PNybDJ7kHS0HXVCxQW1a8N5DggPwTFIHq2c77ocZj4lWoGX1BUXypT0ZdVKbSO2+d0YaShG2p8uNYjxGM=) 2026-04-02 00:20:35.260914 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdMn6rkGL/Vzv82BV8EZF53gQpi3Kx2eYYsjghtLFldGCpW4KHbx0Zxo5uOY/FoUWZMjXg5G+TVBER2H4vgtGs=) 2026-04-02 00:20:35.260924 | orchestrator | 2026-04-02 00:20:35.260934 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:35.260943 | orchestrator | Thursday 02 April 2026 00:20:33 +0000 (0:00:01.035) 0:00:10.967 ******** 2026-04-02 00:20:35.260953 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNcGAd1YkUGscOmTTdtuzHss8XpTPS4f4hiVBwMe7VwhpfYNpp+i49FN2tygnvZwAwPNsUKmIan2pZdzCGzmf04=) 2026-04-02 00:20:35.260964 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuQxWaEr58WegrdWnaoGD0d1xSinV0qm4y1SM2Gbj9ADhzBgpE8RGdQjvyet2QRRDEOL4dg97m4Q16nsxZOlv36/3gbz3hQd7ASfWJpKazWtbUWaPDjn7Hs7LXazKUXwA+L6rtU6qOj4JW7qzjuM/1S2d1Hs6y0LEVDx9BC/OohFO+z/ufgGbw3UbZuMl+4Ykqg83F4+JU03UV64jaGVWC35ARZU0aLr9/mOvgmfZhNXNA0epXAGgxJhLbapMZmu/jbeMb6JQee2vWHExSLKsvCw2I5ADAb4LJ64CvAkl4BfRN8rVmSt3IngwDsu6mKh4Ndu91QmwPqMzafKjeTudrEHAcB8JjkBZH+Rbvgl/b+89TgV0xYfTDEtwpEyBr4KJjluO7uVehn8t6h9JeTMOJSrbtUpN8+OXXrNi8i9XXoxotN+9KuXRGaRdqr24UCXJnIkrNq+3HgqapXPCymU5Wez7Y5CRZNu/eF5l9m5KDflZXAT+/HAXxM1Hb2ADUA2U=) 2026-04-02 00:20:35.260980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILhDYicDO3LPAsIhurOaZmcFzskExYB21FgVjCNR+u5z) 2026-04-02 00:20:35.260990 | orchestrator | 2026-04-02 00:20:35.260999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:35.261009 | orchestrator | Thursday 02 April 2026 00:20:34 +0000 (0:00:01.067) 0:00:12.035 ******** 2026-04-02 00:20:35.261025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKByDfiir7Uw5O1obQo3phXLGViZ1AmSpM0VAZblKzpFxsulwgJtgkVDHgY9jKM4ePjds/I5f5+AQNJhFvFHkgE=) 2026-04-02 00:20:45.787239 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0oEWSmXlJRtvCoJnYqD9rSZ1AWZE/+Tr7g16euEEB99gCfPNFpjwlr1Ll7/jqsxZichlRWnbnfzaMVXbxcyY5vZUx1ChySBTXz95oc/YIP5ZEx3wLXaw0eqdHezsjqBvuGbp1aq1QtUqBuPI5gGVwWaMiojR5X1CYW0wg0teRe0K60pzI7tIqbIm5HwVYEPcHoqkwEBV49ufbZVM8Zk5hP0UBO6gRn4AImvzIrjAR8LuF5BhRhj7oajdFiI15cO9QwV9Kw9JFSo+ZCEOMrnoXGTHU4n4csZHiznNMxlSXok8tqQOICHbcZhoWCU7pQaZnffgEKzrKPFtM/9Oj1g/vbrEUhW5YxYs08ufKZ0vveHM2s4vB6p7bsqaEyK6ricDKQz09pp2pprpXvKpRMwpEQqzP+1L+ae0uFoH8j9ndrxw3kSeVA++ppZ+s8CV1R3B9eiqnOIgVRM4vW2/eOb/ej8dxf1A/9bqMS4Tt+noOBai3JyRwXUamJtuTjTuUmHU=) 2026-04-02 00:20:45.787418 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEbuFo80eyB5ZeRg6ccYagnsF5tHJogIR/vp9bntHDeE) 2026-04-02 00:20:45.787440 | orchestrator | 2026-04-02 00:20:45.787453 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:45.787466 | orchestrator | Thursday 02 April 2026 00:20:35 +0000 (0:00:01.054) 0:00:13.089 ******** 2026-04-02 00:20:45.787478 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC3ya+CL9z1RvIkvvSdG2iyCxvQgTqWyg2Kwodrg59l0L47vMC+32xZjG7gqlLthdATDQEA8haERu349ez94838=) 2026-04-02 00:20:45.787491 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOiTQX1dmaiDzqNPpaQrnCJhbIE/TQN5p0GRl8xJSeY01T7BGyOjVSPtshL5Kk/ffbIVVZjE6If06pL/twXASFAML2rp3cIiIF9DfrIMMsrjsHvphbZPyU7XJTmDDlzAbH/f2h6u1WZN45aJEOXetyg8r03as89tDW6JRONeM1QrJRql44Cbl844NMu7Cab9L8jI45Rh00uTQ6OMxu7jWiozwjy0nmzbq7LiEtFP1ltNb0831TpbHC4QA9kb+2Wj4z33QxZJOjsyEODM8l3Ay8tsaFhNwWpmel4e96pCpmhUyqS6X2ykOVSrJjqR4LGUDcwYWAM3ZxPDoQL63C1HBB9/V3V4UfvNDmFY44sDny7AiOlThCLsqKZTnlHWAzxSKpkXI0bnHRHUhHN1vj5TMPcNjlzn2sqOmfcbsKULoe8tl2tB4iq/l9SC4HqwY8dXL5QCxbmSvTZCiCu2skyMhQAluM3vqYPl6JboL5nHGDF8URpeRODroPK6eVkaevw1s=) 2026-04-02 00:20:45.787503 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIgeX5O1EQUoUdk9eSZdE2j8P+zfCcwCp5DZCBPruQZr) 2026-04-02 00:20:45.787514 | orchestrator | 2026-04-02 00:20:45.787526 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-02 00:20:45.787538 | orchestrator | Thursday 02 April 2026 00:20:36 +0000 (0:00:01.029) 0:00:14.119 ******** 2026-04-02 00:20:45.787550 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-02 00:20:45.787561 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-02 00:20:45.787572 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-02 00:20:45.787583 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-02 00:20:45.787594 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-02 00:20:45.787624 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-02 00:20:45.787635 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-02 00:20:45.787669 | orchestrator | 2026-04-02 00:20:45.787681 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-02 00:20:45.787693 | orchestrator | Thursday 02 April 2026 00:20:42 +0000 (0:00:05.323) 0:00:19.443 ******** 2026-04-02 00:20:45.787706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-02 00:20:45.787718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-02 00:20:45.787729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-1) 2026-04-02 00:20:45.787740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-02 00:20:45.787751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-02 00:20:45.787762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-02 00:20:45.787776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-02 00:20:45.787789 | orchestrator | 2026-04-02 00:20:45.787818 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:45.787832 | orchestrator | Thursday 02 April 2026 00:20:42 +0000 (0:00:00.156) 0:00:19.599 ******** 2026-04-02 00:20:45.787844 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLGNEDSkQViDnqt9VdVH6dVpc1dDtfiyK+KkqpCwWt/bGlpxR8i8ZQIlOZq+7GrxbUzpA8GF8s7PVstDXDVOCIg=) 2026-04-02 00:20:45.787862 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKa/SpZpwRwtLySI7ySGqJroKN3SCooGz6dGf1HT+oQLNisPYw/+0m1C5vvDuhed+Ne+6wWXTNyOwkyfft1O6d6K4QeakvWAuzzfx4reo4zTsdKxbgkPkJ+ART36bHeErU/eQ4v8GcU19OL2G2LqmVbxRoN5EsjLNb+wOVlsPNd7BvdyVjPnVk/26ZaCRwj5V8cHlmRbWPPEimXNgtPEFDtBWjB4tuoY5D4t2tkssoIRQZWZycKi4lZ3vB+JXu1MS1wxdfZWC6RHEgIgK6bM+Ch2dk/Y13YBIwdA9c+YFEIr0uzijfB1eGiDA1i/kjAGL+B4jga8Eo5fFJro4SnHvtIy9MsdjiSbSnZGQLfx7o15+tZvXGDtDtLOHS2mUHfUY4ordrX8SCSaEgdf5yi9kVRFhHmu5zFZa4t3QAyIKBY5HuKV9oav0qDsC1Zq1bK0b2qBUNGInTTy/A9NuT0+U1dMIbnwjcZHSw0fbCS6sFbzafq02pZIyyLdckYeMJ8rk=) 2026-04-02 00:20:45.787875 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICUUrnsMO6DTym5TKDLBZcI4pxNk+kmA+JLAYGDupPe/) 2026-04-02 00:20:45.787888 | orchestrator | 2026-04-02 00:20:45.787901 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:45.787914 | orchestrator | Thursday 02 April 2026 00:20:43 +0000 (0:00:00.988) 0:00:20.587 ******** 2026-04-02 00:20:45.787927 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtVjw4x+mkoIGg3DOdvcBF8e2GIbA9FViGP5zx9fBp+R949K6kk0eVPMIGowZGpAMeQ1d7K+g8lfAwhMWIt6L82qcQKb3LCxR8x1zi34MuCJ5kg1YcBQkvdtw1tas/V3xMrDqpZWFqABHW4AlQN4CKhEJKddNTM4Yi69LkaeBO7n/6n22DXhGj5RoigCT2OWkZVDCOGVOvrQ/6yjIsJDCiX50hRx+LdjY/nsQajtBWrFGPWY4vPfaEXxDAhFMXO/p5EGxZVj0CHad0+Q+ER7roXCzrJREWLztRINzRdrYyfI/O8TsRiRKxb5olq/xcvKhVQJSBaM4y7/c+qIB2M9dp7IARIQXybPYW2pTQ7IPbQQQQcZURToW0lT5Ecku0o4YYA+YL3hQ6KeP19V590SLIpwmXqSzKys1VdoZvcpuqq57NlFH1VVMWe+Rlu21wWl+azaY7bDy5ls8ZejsRE7uLTfR+Dl5WDBXB1jtODmBApt2/p6zL/SttLF+VLMgmQQU=) 2026-04-02 00:20:45.787948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNSSHWBCrEqqGGjXwK4iI9dNHhXn9UZFcOetqy/WiMQME87qLoTBE2fR5HbJYc8aNE2MhzBlKiM2GM+8Obolk8I=) 2026-04-02 00:20:45.787963 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBJIdpZxCT5HpfdWVSJZGuGyN0v41Z8wh3DIHR1dGPr3) 2026-04-02 00:20:45.787976 | orchestrator | 2026-04-02 00:20:45.787989 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:45.788001 | orchestrator | Thursday 02 April 2026 00:20:44 +0000 (0:00:00.976) 0:00:21.564 ******** 2026-04-02 00:20:45.788014 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM7DztO1hISOTsJhL/IWpc+uHcpZ6d/A1nEC2Cl0XpZK) 2026-04-02 00:20:45.788027 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDN6Fvnu2fF3kSdDZ0/xl6zcB0/hTrvB3bVmd4ZxjYHelh75u+crvljR3QHnmW5D6+BsOf8OO7/U6NKQFOxTnZMP3S7PQ7NPwqUaVdHWeU0+AQEjVdNoVNe5KsnExTNb2gWX6YZWHg5ZNugKnioTQa92QxBimHZ8G5AGU2l+QS37W5DeOy91xRFo78c/AXXka3mMBK8yuPIdUbT9Vo0gb7/d8bGUfwPDN4RIdT9JEYClD6trOqxVWpCZnybjWweBjQljucxrS/u0BK6i+BTjNXowoP6t88g6YXZ8xgRAXu+YGCLKWmSGRZwCA68OtU2BxVGs0KVSD05b/yaOxSWSrygzdhgwDDvlUO2vYySjQ6WtITX/ul0aZC9GuODFAfEZh6IoO72dxKZbEC0Yqdc2EcFjLkQ0O5CojqMtYX7QOGUUwSktUFODLMCv/EBtuIuj+YoBO3eP5ROBK3XhhsrtPS49qmvQJrbQ1V57c7HM1HRETVhkikLkFykySFpVlJHy+U=) 2026-04-02 00:20:45.788041 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFkNYYfcLqHoY/TcTMhWDf4hdrBKnCFCD+mVGq7u/uYVPGSxTnJsJLG3vAfpQNlPQScpElbGJfjTs0hlXc5prs4=) 2026-04-02 00:20:45.788054 | orchestrator | 2026-04-02 00:20:45.788067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:45.788079 | orchestrator | Thursday 02 April 2026 00:20:45 +0000 (0:00:00.998) 0:00:22.563 ******** 2026-04-02 00:20:45.788107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvpdv3/Rua9w0tPTJ/GvkajXLTI8jKyIbuo2OI//u2c5oV3In5IgKWgeoSlp+f6MrPNakhpj2NQb8v9Ph8PdPOiWN6ib8ZLedNXcfMpO0bKd1ea6uz+1mAwqT6fs+0l1sML4zbNcGsE0+rsjvq5OQQTAeJsBpEfzY8Axh5ODgKYoYHPZ7h5jJ2gaaa5AejtrmOEms3PqR5fKj0XqI64SesuH5S/4yesj0Yn3eYPZyUK2faOPYoetrWQYlH6q+XMA656yMZGLJwp3DwJaHUPU/QhBr8A4s8g7RSWm4MdEyWn0Cr5A2CpnBeq3rjJEta8dOcckzZwlPIqqicv9qq/du80InI0GeO745jJFj1CqWZC6cC+oLPcanqSW9MhwUSx6mi2oLALV8v7+1aJckNOSJg3d2+jgrp9Wro8h4a3zSJWXiSz7PNybDJ7kHS0HXVCxQW1a8N5DggPwTFIHq2c77ocZj4lWoGX1BUXypT0ZdVKbSO2+d0YaShG2p8uNYjxGM=) 2026-04-02 00:20:50.443826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdMn6rkGL/Vzv82BV8EZF53gQpi3Kx2eYYsjghtLFldGCpW4KHbx0Zxo5uOY/FoUWZMjXg5G+TVBER2H4vgtGs=) 2026-04-02 00:20:50.443929 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA4fOAqbhj8cE4Cd6D7zeOKyPlYtZjG2vJDcyEKQDnel) 2026-04-02 00:20:50.443946 | orchestrator | 2026-04-02 00:20:50.443959 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:50.443971 | orchestrator | Thursday 02 April 2026 00:20:46 +0000 (0:00:00.994) 0:00:23.557 ******** 2026-04-02 00:20:50.443984 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuQxWaEr58WegrdWnaoGD0d1xSinV0qm4y1SM2Gbj9ADhzBgpE8RGdQjvyet2QRRDEOL4dg97m4Q16nsxZOlv36/3gbz3hQd7ASfWJpKazWtbUWaPDjn7Hs7LXazKUXwA+L6rtU6qOj4JW7qzjuM/1S2d1Hs6y0LEVDx9BC/OohFO+z/ufgGbw3UbZuMl+4Ykqg83F4+JU03UV64jaGVWC35ARZU0aLr9/mOvgmfZhNXNA0epXAGgxJhLbapMZmu/jbeMb6JQee2vWHExSLKsvCw2I5ADAb4LJ64CvAkl4BfRN8rVmSt3IngwDsu6mKh4Ndu91QmwPqMzafKjeTudrEHAcB8JjkBZH+Rbvgl/b+89TgV0xYfTDEtwpEyBr4KJjluO7uVehn8t6h9JeTMOJSrbtUpN8+OXXrNi8i9XXoxotN+9KuXRGaRdqr24UCXJnIkrNq+3HgqapXPCymU5Wez7Y5CRZNu/eF5l9m5KDflZXAT+/HAXxM1Hb2ADUA2U=) 2026-04-02 00:20:50.443998 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNcGAd1YkUGscOmTTdtuzHss8XpTPS4f4hiVBwMe7VwhpfYNpp+i49FN2tygnvZwAwPNsUKmIan2pZdzCGzmf04=) 2026-04-02 00:20:50.444033 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILhDYicDO3LPAsIhurOaZmcFzskExYB21FgVjCNR+u5z) 2026-04-02 00:20:50.444045 | orchestrator | 2026-04-02 00:20:50.444072 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:50.444084 | orchestrator | Thursday 02 April 2026 00:20:47 +0000 (0:00:01.029) 0:00:24.587 ******** 2026-04-02 00:20:50.444095 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKByDfiir7Uw5O1obQo3phXLGViZ1AmSpM0VAZblKzpFxsulwgJtgkVDHgY9jKM4ePjds/I5f5+AQNJhFvFHkgE=) 2026-04-02 00:20:50.444107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0oEWSmXlJRtvCoJnYqD9rSZ1AWZE/+Tr7g16euEEB99gCfPNFpjwlr1Ll7/jqsxZichlRWnbnfzaMVXbxcyY5vZUx1ChySBTXz95oc/YIP5ZEx3wLXaw0eqdHezsjqBvuGbp1aq1QtUqBuPI5gGVwWaMiojR5X1CYW0wg0teRe0K60pzI7tIqbIm5HwVYEPcHoqkwEBV49ufbZVM8Zk5hP0UBO6gRn4AImvzIrjAR8LuF5BhRhj7oajdFiI15cO9QwV9Kw9JFSo+ZCEOMrnoXGTHU4n4csZHiznNMxlSXok8tqQOICHbcZhoWCU7pQaZnffgEKzrKPFtM/9Oj1g/vbrEUhW5YxYs08ufKZ0vveHM2s4vB6p7bsqaEyK6ricDKQz09pp2pprpXvKpRMwpEQqzP+1L+ae0uFoH8j9ndrxw3kSeVA++ppZ+s8CV1R3B9eiqnOIgVRM4vW2/eOb/ej8dxf1A/9bqMS4Tt+noOBai3JyRwXUamJtuTjTuUmHU=) 2026-04-02 00:20:50.444119 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEbuFo80eyB5ZeRg6ccYagnsF5tHJogIR/vp9bntHDeE) 2026-04-02 00:20:50.444130 | orchestrator | 2026-04-02 00:20:50.444142 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-02 00:20:50.444152 | orchestrator | Thursday 02 April 2026 00:20:48 +0000 (0:00:01.023) 
0:00:25.610 ******** 2026-04-02 00:20:50.444164 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOiTQX1dmaiDzqNPpaQrnCJhbIE/TQN5p0GRl8xJSeY01T7BGyOjVSPtshL5Kk/ffbIVVZjE6If06pL/twXASFAML2rp3cIiIF9DfrIMMsrjsHvphbZPyU7XJTmDDlzAbH/f2h6u1WZN45aJEOXetyg8r03as89tDW6JRONeM1QrJRql44Cbl844NMu7Cab9L8jI45Rh00uTQ6OMxu7jWiozwjy0nmzbq7LiEtFP1ltNb0831TpbHC4QA9kb+2Wj4z33QxZJOjsyEODM8l3Ay8tsaFhNwWpmel4e96pCpmhUyqS6X2ykOVSrJjqR4LGUDcwYWAM3ZxPDoQL63C1HBB9/V3V4UfvNDmFY44sDny7AiOlThCLsqKZTnlHWAzxSKpkXI0bnHRHUhHN1vj5TMPcNjlzn2sqOmfcbsKULoe8tl2tB4iq/l9SC4HqwY8dXL5QCxbmSvTZCiCu2skyMhQAluM3vqYPl6JboL5nHGDF8URpeRODroPK6eVkaevw1s=) 2026-04-02 00:20:50.444175 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC3ya+CL9z1RvIkvvSdG2iyCxvQgTqWyg2Kwodrg59l0L47vMC+32xZjG7gqlLthdATDQEA8haERu349ez94838=) 2026-04-02 00:20:50.444187 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIgeX5O1EQUoUdk9eSZdE2j8P+zfCcwCp5DZCBPruQZr) 2026-04-02 00:20:50.444198 | orchestrator | 2026-04-02 00:20:50.444209 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-02 00:20:50.444220 | orchestrator | Thursday 02 April 2026 00:20:49 +0000 (0:00:01.027) 0:00:26.637 ******** 2026-04-02 00:20:50.444232 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-02 00:20:50.444243 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-02 00:20:50.444272 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-02 00:20:50.444316 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-02 00:20:50.444328 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-02 00:20:50.444339 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-02 
00:20:50.444349 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-02 00:20:50.444361 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:20:50.444374 | orchestrator | 2026-04-02 00:20:50.444387 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-02 00:20:50.444399 | orchestrator | Thursday 02 April 2026 00:20:49 +0000 (0:00:00.175) 0:00:26.813 ******** 2026-04-02 00:20:50.444420 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:20:50.444433 | orchestrator | 2026-04-02 00:20:50.444446 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-02 00:20:50.444459 | orchestrator | Thursday 02 April 2026 00:20:49 +0000 (0:00:00.048) 0:00:26.861 ******** 2026-04-02 00:20:50.444472 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:20:50.444484 | orchestrator | 2026-04-02 00:20:50.444496 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-02 00:20:50.444509 | orchestrator | Thursday 02 April 2026 00:20:49 +0000 (0:00:00.048) 0:00:26.909 ******** 2026-04-02 00:20:50.444522 | orchestrator | changed: [testbed-manager] 2026-04-02 00:20:50.444534 | orchestrator | 2026-04-02 00:20:50.444546 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:20:50.444560 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-02 00:20:50.444574 | orchestrator | 2026-04-02 00:20:50.444586 | orchestrator | 2026-04-02 00:20:50.444599 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:20:50.444612 | orchestrator | Thursday 02 April 2026 00:20:50 +0000 (0:00:00.487) 0:00:27.397 ******** 2026-04-02 00:20:50.444625 | orchestrator | =============================================================================== 
2026-04-02 00:20:50.444638 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.21s 2026-04-02 00:20:50.444650 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.32s 2026-04-02 00:20:50.444664 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-04-02 00:20:50.444676 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-02 00:20:50.444689 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-02 00:20:50.444702 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-02 00:20:50.444715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-04-02 00:20:50.444726 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-02 00:20:50.444736 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-02 00:20:50.444747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-04-02 00:20:50.444758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-02 00:20:50.444777 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-02 00:20:50.444788 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-02 00:20:50.444799 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-02 00:20:50.444810 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-02 00:20:50.444821 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 
2026-04-02 00:20:50.444831 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2026-04-02 00:20:50.444842 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-04-02 00:20:50.444853 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-04-02 00:20:50.444864 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-04-02 00:20:50.633539 | orchestrator | + osism apply squid 2026-04-02 00:21:01.975537 | orchestrator | 2026-04-02 00:21:01 | INFO  | Prepare task for execution of squid. 2026-04-02 00:21:02.049222 | orchestrator | 2026-04-02 00:21:02 | INFO  | Task 85c57788-8761-417f-bf26-3e15ac8a2e2a (squid) was prepared for execution. 2026-04-02 00:21:02.049314 | orchestrator | 2026-04-02 00:21:02 | INFO  | It takes a moment until task 85c57788-8761-417f-bf26-3e15ac8a2e2a (squid) has been started and output is visible here. 
2026-04-02 00:22:54.246875 | orchestrator | 2026-04-02 00:22:54.246969 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-02 00:22:54.246982 | orchestrator | 2026-04-02 00:22:54.246993 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-02 00:22:54.247002 | orchestrator | Thursday 02 April 2026 00:21:05 +0000 (0:00:00.195) 0:00:00.195 ******** 2026-04-02 00:22:54.247012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-02 00:22:54.247023 | orchestrator | 2026-04-02 00:22:54.247031 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-02 00:22:54.247040 | orchestrator | Thursday 02 April 2026 00:21:05 +0000 (0:00:00.087) 0:00:00.283 ******** 2026-04-02 00:22:54.247049 | orchestrator | ok: [testbed-manager] 2026-04-02 00:22:54.247059 | orchestrator | 2026-04-02 00:22:54.247068 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-02 00:22:54.247077 | orchestrator | Thursday 02 April 2026 00:21:07 +0000 (0:00:02.317) 0:00:02.600 ******** 2026-04-02 00:22:54.247086 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-02 00:22:54.247095 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-02 00:22:54.247141 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-02 00:22:54.247156 | orchestrator | 2026-04-02 00:22:54.247169 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-02 00:22:54.247183 | orchestrator | Thursday 02 April 2026 00:21:08 +0000 (0:00:01.216) 0:00:03.817 ******** 2026-04-02 00:22:54.247196 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-02 00:22:54.247211 | 
orchestrator | 2026-04-02 00:22:54.247225 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-02 00:22:54.247238 | orchestrator | Thursday 02 April 2026 00:21:09 +0000 (0:00:00.999) 0:00:04.817 ******** 2026-04-02 00:22:54.247252 | orchestrator | ok: [testbed-manager] 2026-04-02 00:22:54.247265 | orchestrator | 2026-04-02 00:22:54.247281 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-02 00:22:54.247296 | orchestrator | Thursday 02 April 2026 00:21:10 +0000 (0:00:00.336) 0:00:05.153 ******** 2026-04-02 00:22:54.247306 | orchestrator | changed: [testbed-manager] 2026-04-02 00:22:54.247315 | orchestrator | 2026-04-02 00:22:54.247324 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-02 00:22:54.247333 | orchestrator | Thursday 02 April 2026 00:21:11 +0000 (0:00:00.848) 0:00:06.002 ******** 2026-04-02 00:22:54.247342 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-02 00:22:54.247351 | orchestrator | ok: [testbed-manager] 2026-04-02 00:22:54.247360 | orchestrator | 2026-04-02 00:22:54.247368 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-02 00:22:54.247377 | orchestrator | Thursday 02 April 2026 00:21:41 +0000 (0:00:30.426) 0:00:36.429 ******** 2026-04-02 00:22:54.247386 | orchestrator | changed: [testbed-manager] 2026-04-02 00:22:54.247394 | orchestrator | 2026-04-02 00:22:54.247403 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-02 00:22:54.247412 | orchestrator | Thursday 02 April 2026 00:21:53 +0000 (0:00:11.938) 0:00:48.368 ******** 2026-04-02 00:22:54.247421 | orchestrator | Pausing for 60 seconds 2026-04-02 00:22:54.247430 | orchestrator | changed: [testbed-manager] 2026-04-02 00:22:54.247441 | orchestrator | 2026-04-02 00:22:54.247452 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-02 00:22:54.247462 | orchestrator | Thursday 02 April 2026 00:22:53 +0000 (0:01:00.077) 0:01:48.445 ******** 2026-04-02 00:22:54.247472 | orchestrator | ok: [testbed-manager] 2026-04-02 00:22:54.247482 | orchestrator | 2026-04-02 00:22:54.247492 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-02 00:22:54.247526 | orchestrator | Thursday 02 April 2026 00:22:53 +0000 (0:00:00.059) 0:01:48.505 ******** 2026-04-02 00:22:54.247535 | orchestrator | changed: [testbed-manager] 2026-04-02 00:22:54.247543 | orchestrator | 2026-04-02 00:22:54.247552 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:22:54.247561 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:22:54.247570 | orchestrator | 2026-04-02 00:22:54.247578 | orchestrator | 2026-04-02 00:22:54.247588 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-02 00:22:54.247596 | orchestrator | Thursday 02 April 2026 00:22:54 +0000 (0:00:00.533) 0:01:49.038 ******** 2026-04-02 00:22:54.247605 | orchestrator | =============================================================================== 2026-04-02 00:22:54.247614 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-04-02 00:22:54.247622 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.43s 2026-04-02 00:22:54.247631 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.94s 2026-04-02 00:22:54.247639 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.32s 2026-04-02 00:22:54.247647 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-04-02 00:22:54.247656 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.00s 2026-04-02 00:22:54.247664 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.85s 2026-04-02 00:22:54.247673 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.53s 2026-04-02 00:22:54.247681 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-04-02 00:22:54.247690 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-04-02 00:22:54.247699 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-02 00:22:54.361514 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-02 00:22:54.361592 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-02 00:22:54.365028 | orchestrator | + set -e 2026-04-02 00:22:54.365063 | orchestrator | + NAMESPACE=kolla 2026-04-02 
00:22:54.365077 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-02 00:22:54.370745 | orchestrator | ++ semver latest 9.0.0 2026-04-02 00:22:54.418568 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-02 00:22:54.418655 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-02 00:22:54.419068 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-02 00:23:05.605436 | orchestrator | 2026-04-02 00:23:05 | INFO  | Prepare task for execution of operator. 2026-04-02 00:23:05.674704 | orchestrator | 2026-04-02 00:23:05 | INFO  | Task a041a6be-e318-42be-8ae0-a66a74b1f6ee (operator) was prepared for execution. 2026-04-02 00:23:05.674791 | orchestrator | 2026-04-02 00:23:05 | INFO  | It takes a moment until task a041a6be-e318-42be-8ae0-a66a74b1f6ee (operator) has been started and output is visible here. 2026-04-02 00:23:20.599943 | orchestrator | 2026-04-02 00:23:20.600126 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-02 00:23:20.600148 | orchestrator | 2026-04-02 00:23:20.600161 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 00:23:20.600173 | orchestrator | Thursday 02 April 2026 00:23:08 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-04-02 00:23:20.600184 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:23:20.600197 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:23:20.600209 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:23:20.600220 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:23:20.600231 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:23:20.600241 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:23:20.600256 | orchestrator | 2026-04-02 00:23:20.600267 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-02 00:23:20.600302 | orchestrator | Thursday 02 April 2026 00:23:11 
+0000 (0:00:03.412) 0:00:03.576 ******** 2026-04-02 00:23:20.600314 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:23:20.600325 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:23:20.600335 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:23:20.600346 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:23:20.600357 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:23:20.600368 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:23:20.600379 | orchestrator | 2026-04-02 00:23:20.600390 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-02 00:23:20.600401 | orchestrator | 2026-04-02 00:23:20.600412 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-02 00:23:20.600423 | orchestrator | Thursday 02 April 2026 00:23:12 +0000 (0:00:00.792) 0:00:04.369 ******** 2026-04-02 00:23:20.600433 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:23:20.600445 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:23:20.600458 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:23:20.600470 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:23:20.600482 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:23:20.600494 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:23:20.600506 | orchestrator | 2026-04-02 00:23:20.600519 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-02 00:23:20.600551 | orchestrator | Thursday 02 April 2026 00:23:12 +0000 (0:00:00.153) 0:00:04.522 ******** 2026-04-02 00:23:20.600570 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:23:20.600582 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:23:20.600595 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:23:20.600608 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:23:20.600620 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:23:20.600633 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:23:20.600646 | orchestrator | 
2026-04-02 00:23:20.600658 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-02 00:23:20.600670 | orchestrator | Thursday 02 April 2026 00:23:12 +0000 (0:00:00.145) 0:00:04.668 ******** 2026-04-02 00:23:20.600683 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:23:20.600696 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:23:20.600708 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:23:20.600720 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:23:20.600733 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:23:20.600745 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:23:20.600757 | orchestrator | 2026-04-02 00:23:20.600770 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-02 00:23:20.600783 | orchestrator | Thursday 02 April 2026 00:23:13 +0000 (0:00:00.723) 0:00:05.391 ******** 2026-04-02 00:23:20.600796 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:23:20.600809 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:23:20.600820 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:23:20.600831 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:23:20.600841 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:23:20.600852 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:23:20.600863 | orchestrator | 2026-04-02 00:23:20.600875 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-02 00:23:20.600886 | orchestrator | Thursday 02 April 2026 00:23:14 +0000 (0:00:00.908) 0:00:06.299 ******** 2026-04-02 00:23:20.600897 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-02 00:23:20.600908 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-02 00:23:20.600919 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-02 00:23:20.600930 | orchestrator | changed: [testbed-node-3] => (item=adm) 
2026-04-02 00:23:20.600941 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-02 00:23:20.600952 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-02 00:23:20.600962 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-02 00:23:20.600973 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-02 00:23:20.600984 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-02 00:23:20.601003 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-02 00:23:20.601014 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-02 00:23:20.601024 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-02 00:23:20.601035 | orchestrator |
2026-04-02 00:23:20.601046 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-02 00:23:20.601057 | orchestrator | Thursday 02 April 2026 00:23:15 +0000 (0:00:01.229) 0:00:07.529 ********
2026-04-02 00:23:20.601068 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:23:20.601101 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:23:20.601112 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:23:20.601123 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:23:20.601134 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:23:20.601145 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:23:20.601156 | orchestrator |
2026-04-02 00:23:20.601167 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-02 00:23:20.601179 | orchestrator | Thursday 02 April 2026 00:23:17 +0000 (0:00:01.333) 0:00:08.862 ********
2026-04-02 00:23:20.601190 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-02 00:23:20.601201 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-02 00:23:20.601212 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-02 00:23:20.601223 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-02 00:23:20.601234 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-02 00:23:20.601265 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-02 00:23:20.601277 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-02 00:23:20.601288 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-02 00:23:20.601299 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-02 00:23:20.601309 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-02 00:23:20.601320 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-02 00:23:20.601331 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-02 00:23:20.601342 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-02 00:23:20.601353 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-02 00:23:20.601364 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-02 00:23:20.601374 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-02 00:23:20.601385 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-02 00:23:20.601396 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-02 00:23:20.601407 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-02 00:23:20.601418 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-02 00:23:20.601429 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-02 00:23:20.601440 | orchestrator |
2026-04-02 00:23:20.601451 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-02 00:23:20.601462 | orchestrator | Thursday 02 April 2026 00:23:18 +0000 (0:00:01.292) 0:00:10.154 ********
2026-04-02 00:23:20.601473 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:23:20.601484 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:23:20.601495 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:23:20.601505 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:23:20.601516 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:23:20.601527 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:23:20.601538 | orchestrator |
2026-04-02 00:23:20.601549 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-02 00:23:20.601566 | orchestrator | Thursday 02 April 2026 00:23:18 +0000 (0:00:00.168) 0:00:10.323 ********
2026-04-02 00:23:20.601577 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:23:20.601588 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:23:20.601599 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:23:20.601610 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:23:20.601620 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:23:20.601631 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:23:20.601642 | orchestrator |
2026-04-02 00:23:20.601653 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-02 00:23:20.601663 | orchestrator | Thursday 02 April 2026 00:23:18 +0000 (0:00:00.190) 0:00:10.513 ********
2026-04-02 00:23:20.601674 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:23:20.601685 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:23:20.601696 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:23:20.601707 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:23:20.601717 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:23:20.601728 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:23:20.601739 | orchestrator |
2026-04-02 00:23:20.601750 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-02 00:23:20.601761 | orchestrator | Thursday 02 April 2026 00:23:19 +0000 (0:00:00.668) 0:00:11.182 ********
2026-04-02 00:23:20.601772 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:23:20.601783 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:23:20.601793 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:23:20.601804 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:23:20.601815 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:23:20.601826 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:23:20.601836 | orchestrator |
2026-04-02 00:23:20.601847 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-02 00:23:20.601858 | orchestrator | Thursday 02 April 2026 00:23:19 +0000 (0:00:00.162) 0:00:11.344 ********
2026-04-02 00:23:20.601869 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-02 00:23:20.601880 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-02 00:23:20.601891 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-02 00:23:20.601902 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:23:20.601912 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:23:20.601923 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:23:20.601934 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-02 00:23:20.601945 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-02 00:23:20.601956 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-02 00:23:20.601966 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:23:20.601977 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:23:20.601988 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:23:20.601999 | orchestrator |
2026-04-02 00:23:20.602010 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-02 00:23:20.602144 | orchestrator | Thursday 02 April 2026 00:23:20 +0000 (0:00:00.697) 0:00:12.042 ********
2026-04-02 00:23:20.602167 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:23:20.602186 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:23:20.602215 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:23:20.602227 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:23:20.602238 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:23:20.602248 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:23:20.602259 | orchestrator |
2026-04-02 00:23:20.602270 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-02 00:23:20.602281 | orchestrator | Thursday 02 April 2026 00:23:20 +0000 (0:00:00.130) 0:00:12.173 ********
2026-04-02 00:23:20.602292 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:23:20.602303 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:23:20.602314 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:23:20.602324 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:23:20.602354 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:23:21.807813 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:23:21.807915 | orchestrator |
2026-04-02 00:23:21.807932 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-02 00:23:21.807945 | orchestrator | Thursday 02 April 2026 00:23:20 +0000 (0:00:00.149) 0:00:12.323 ********
2026-04-02 00:23:21.807956 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:23:21.808032 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:23:21.808045 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:23:21.808056 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:23:21.808067 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:23:21.808125 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:23:21.808137 | orchestrator |
2026-04-02 00:23:21.808148 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-02 00:23:21.808159 | orchestrator | Thursday 02 April 2026 00:23:20 +0000 (0:00:00.143) 0:00:12.466 ********
2026-04-02 00:23:21.808170 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:23:21.808181 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:23:21.808191 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:23:21.808203 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:23:21.808223 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:23:21.808240 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:23:21.808258 | orchestrator |
2026-04-02 00:23:21.808278 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-02 00:23:21.808297 | orchestrator | Thursday 02 April 2026 00:23:21 +0000 (0:00:00.655) 0:00:13.122 ********
2026-04-02 00:23:21.808315 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:23:21.808326 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:23:21.808337 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:23:21.808348 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:23:21.808361 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:23:21.808373 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:23:21.808385 | orchestrator |
2026-04-02 00:23:21.808398 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:23:21.808440 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 00:23:21.808453 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 00:23:21.808464 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 00:23:21.808475 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 00:23:21.808485 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 00:23:21.808496 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 00:23:21.808507 | orchestrator |
2026-04-02 00:23:21.808518 | orchestrator |
2026-04-02 00:23:21.808529 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:23:21.808539 | orchestrator | Thursday 02 April 2026 00:23:21 +0000 (0:00:00.210) 0:00:13.332 ********
2026-04-02 00:23:21.808550 | orchestrator | ===============================================================================
2026-04-02 00:23:21.808561 | orchestrator | Gathering Facts --------------------------------------------------------- 3.41s
2026-04-02 00:23:21.808572 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.33s
2026-04-02 00:23:21.808583 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2026-04-02 00:23:21.808617 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.23s
2026-04-02 00:23:21.808629 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.91s
2026-04-02 00:23:21.808639 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2026-04-02 00:23:21.808650 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.72s
2026-04-02 00:23:21.808660 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2026-04-02 00:23:21.808671 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.67s
2026-04-02 00:23:21.808681 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2026-04-02 00:23:21.808692 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-04-02 00:23:21.808703 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-04-02 00:23:21.808714 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-04-02 00:23:21.808725 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-04-02 00:23:21.808735 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2026-04-02 00:23:21.808746 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2026-04-02 00:23:21.808756 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-04-02 00:23:21.808767 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-04-02 00:23:21.808777 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s
2026-04-02 00:23:21.978194 | orchestrator | + osism apply --environment custom facts
2026-04-02 00:23:23.184582 | orchestrator | 2026-04-02 00:23:23 | INFO  | Trying to run play facts in environment custom
2026-04-02 00:23:33.368409 | orchestrator | 2026-04-02 00:23:33 | INFO  | Prepare task for execution of facts.
2026-04-02 00:23:33.442318 | orchestrator | 2026-04-02 00:23:33 | INFO  | Task ce02fbf4-dc5c-48c4-b7d3-fab0f6f62df8 (facts) was prepared for execution.
2026-04-02 00:23:33.442508 | orchestrator | 2026-04-02 00:23:33 | INFO  | It takes a moment until task ce02fbf4-dc5c-48c4-b7d3-fab0f6f62df8 (facts) has been started and output is visible here.
2026-04-02 00:24:16.392726 | orchestrator |
2026-04-02 00:24:16.392833 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-02 00:24:16.392849 | orchestrator |
2026-04-02 00:24:16.392861 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-02 00:24:16.392873 | orchestrator | Thursday 02 April 2026 00:23:36 +0000 (0:00:00.102) 0:00:00.102 ********
2026-04-02 00:24:16.392884 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:24:16.392896 | orchestrator | ok: [testbed-manager]
2026-04-02 00:24:16.392908 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:24:16.392919 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:24:16.392930 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:24:16.392941 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:24:16.392951 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:24:16.392962 | orchestrator |
2026-04-02 00:24:16.392973 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-02 00:24:16.392984 | orchestrator | Thursday 02 April 2026 00:23:37 +0000 (0:00:01.295) 0:00:01.397 ********
2026-04-02 00:24:16.392995 | orchestrator | ok: [testbed-manager]
2026-04-02 00:24:16.393054 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:24:16.393070 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:24:16.393081 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:24:16.393092 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:24:16.393120 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:24:16.393132 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:24:16.393143 | orchestrator |
2026-04-02 00:24:16.393177 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-02 00:24:16.393189 | orchestrator |
2026-04-02 00:24:16.393200 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-02 00:24:16.393211 | orchestrator | Thursday 02 April 2026 00:23:38 +0000 (0:00:01.267) 0:00:02.665 ********
2026-04-02 00:24:16.393222 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.393232 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.393243 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.393254 | orchestrator |
2026-04-02 00:24:16.393265 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-02 00:24:16.393277 | orchestrator | Thursday 02 April 2026 00:23:38 +0000 (0:00:00.077) 0:00:02.742 ********
2026-04-02 00:24:16.393288 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.393299 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.393309 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.393320 | orchestrator |
2026-04-02 00:24:16.393331 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-02 00:24:16.393342 | orchestrator | Thursday 02 April 2026 00:23:39 +0000 (0:00:00.190) 0:00:02.912 ********
2026-04-02 00:24:16.393353 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.393364 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.393374 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.393385 | orchestrator |
2026-04-02 00:24:16.393397 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-02 00:24:16.393408 | orchestrator | Thursday 02 April 2026 00:23:39 +0000 (0:00:00.119) 0:00:03.102 ********
2026-04-02 00:24:16.393420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:24:16.393432 | orchestrator |
2026-04-02 00:24:16.393443 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-02 00:24:16.393454 | orchestrator | Thursday 02 April 2026 00:23:39 +0000 (0:00:00.412) 0:00:03.222 ********
2026-04-02 00:24:16.393465 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.393476 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.393486 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.393497 | orchestrator |
2026-04-02 00:24:16.393508 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-02 00:24:16.393519 | orchestrator | Thursday 02 April 2026 00:23:39 +0000 (0:00:00.103) 0:00:03.635 ********
2026-04-02 00:24:16.393530 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:24:16.393541 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:24:16.393552 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:24:16.393562 | orchestrator |
2026-04-02 00:24:16.393573 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-02 00:24:16.393584 | orchestrator | Thursday 02 April 2026 00:23:39 +0000 (0:00:00.103) 0:00:03.739 ********
2026-04-02 00:24:16.393595 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:24:16.393606 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:24:16.393617 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:24:16.393627 | orchestrator |
2026-04-02 00:24:16.393638 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-02 00:24:16.393649 | orchestrator | Thursday 02 April 2026 00:23:40 +0000 (0:00:01.007) 0:00:04.746 ********
2026-04-02 00:24:16.393660 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.393671 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.393682 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.393693 | orchestrator |
2026-04-02 00:24:16.393704 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-02 00:24:16.393715 | orchestrator | Thursday 02 April 2026 00:23:41 +0000 (0:00:00.439) 0:00:05.186 ********
2026-04-02 00:24:16.393726 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:24:16.393737 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:24:16.393748 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:24:16.393759 | orchestrator |
2026-04-02 00:24:16.393777 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-02 00:24:16.393788 | orchestrator | Thursday 02 April 2026 00:23:42 +0000 (0:00:01.059) 0:00:06.246 ********
2026-04-02 00:24:16.393799 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:24:16.393810 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:24:16.393821 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:24:16.393831 | orchestrator |
2026-04-02 00:24:16.393843 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-02 00:24:16.393853 | orchestrator | Thursday 02 April 2026 00:23:59 +0000 (0:00:16.726) 0:00:22.972 ********
2026-04-02 00:24:16.393864 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:24:16.393875 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:24:16.393886 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:24:16.393897 | orchestrator |
2026-04-02 00:24:16.393908 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-02 00:24:16.393938 | orchestrator | Thursday 02 April 2026 00:23:59 +0000 (0:00:00.088) 0:00:23.061 ********
2026-04-02 00:24:16.393950 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:24:16.393961 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:24:16.393971 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:24:16.393982 | orchestrator |
2026-04-02 00:24:16.393993 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-02 00:24:16.394004 | orchestrator | Thursday 02 April 2026 00:24:07 +0000 (0:00:08.117) 0:00:31.178 ********
2026-04-02 00:24:16.394092 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.394104 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.394115 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.394126 | orchestrator |
2026-04-02 00:24:16.394137 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-02 00:24:16.394148 | orchestrator | Thursday 02 April 2026 00:24:07 +0000 (0:00:00.443) 0:00:31.621 ********
2026-04-02 00:24:16.394159 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-02 00:24:16.394171 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-02 00:24:16.394182 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-02 00:24:16.394193 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-02 00:24:16.394204 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-02 00:24:16.394215 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-02 00:24:16.394226 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-02 00:24:16.394237 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-02 00:24:16.394248 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-02 00:24:16.394259 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-02 00:24:16.394269 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-02 00:24:16.394280 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-02 00:24:16.394291 | orchestrator |
2026-04-02 00:24:16.394302 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-02 00:24:16.394313 | orchestrator | Thursday 02 April 2026 00:24:11 +0000 (0:00:03.474) 0:00:35.095 ********
2026-04-02 00:24:16.394324 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.394335 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.394345 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.394356 | orchestrator |
2026-04-02 00:24:16.394367 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-02 00:24:16.394378 | orchestrator |
2026-04-02 00:24:16.394389 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-02 00:24:16.394440 | orchestrator | Thursday 02 April 2026 00:24:12 +0000 (0:00:01.431) 0:00:36.527 ********
2026-04-02 00:24:16.394452 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:24:16.394481 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:24:16.394492 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:24:16.394503 | orchestrator | ok: [testbed-manager]
2026-04-02 00:24:16.394514 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:16.394524 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:16.394535 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:16.394546 | orchestrator |
2026-04-02 00:24:16.394556 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:24:16.394568 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:24:16.394580 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:24:16.394592 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:24:16.394607 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:24:16.394626 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:24:16.394646 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:24:16.394663 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:24:16.394681 | orchestrator |
2026-04-02 00:24:16.394699 | orchestrator |
2026-04-02 00:24:16.394717 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:24:16.394735 | orchestrator | Thursday 02 April 2026 00:24:16 +0000 (0:00:03.616) 0:00:40.143 ********
2026-04-02 00:24:16.394753 | orchestrator | ===============================================================================
2026-04-02 00:24:16.394773 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.73s
2026-04-02 00:24:16.394791 | orchestrator | Install required packages (Debian) -------------------------------------- 8.12s
2026-04-02 00:24:16.394837 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.62s
2026-04-02 00:24:16.394850 | orchestrator | Copy fact files --------------------------------------------------------- 3.47s
2026-04-02 00:24:16.394861 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.43s
2026-04-02 00:24:16.394872 | orchestrator | Create custom facts directory ------------------------------------------- 1.30s
2026-04-02 00:24:16.394893 | orchestrator | Copy fact file ---------------------------------------------------------- 1.27s
2026-04-02 00:24:16.568205 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-04-02 00:24:16.568296 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s
2026-04-02 00:24:16.568306 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-04-02 00:24:16.568313 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-04-02 00:24:16.568320 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2026-04-02 00:24:16.568327 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2026-04-02 00:24:16.568333 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2026-04-02 00:24:16.568340 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2026-04-02 00:24:16.568347 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-04-02 00:24:16.568369 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-04-02 00:24:16.568376 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-04-02 00:24:16.737726 | orchestrator | + osism apply bootstrap
2026-04-02 00:24:28.080702 | orchestrator | 2026-04-02 00:24:28 | INFO  | Prepare task for execution of bootstrap.
2026-04-02 00:24:28.224234 | orchestrator | 2026-04-02 00:24:28 | INFO  | Task 25e18ce0-8d79-42de-9176-148e12ba4f3d (bootstrap) was prepared for execution.
2026-04-02 00:24:28.224315 | orchestrator | 2026-04-02 00:24:28 | INFO  | It takes a moment until task 25e18ce0-8d79-42de-9176-148e12ba4f3d (bootstrap) has been started and output is visible here.
2026-04-02 00:24:43.592729 | orchestrator |
2026-04-02 00:24:43.592860 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-02 00:24:43.592887 | orchestrator |
2026-04-02 00:24:43.592909 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-02 00:24:43.592929 | orchestrator | Thursday 02 April 2026 00:24:31 +0000 (0:00:00.186) 0:00:00.187 ********
2026-04-02 00:24:43.592949 | orchestrator | ok: [testbed-manager]
2026-04-02 00:24:43.592970 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:24:43.593075 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:24:43.593095 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:24:43.593112 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:43.593132 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:43.593150 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:43.593167 | orchestrator |
2026-04-02 00:24:43.593185 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-02 00:24:43.593202 | orchestrator |
2026-04-02 00:24:43.593220 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-02 00:24:43.593238 | orchestrator | Thursday 02 April 2026 00:24:31 +0000 (0:00:00.298) 0:00:00.485 ********
2026-04-02 00:24:43.593258 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:24:43.593279 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:24:43.593297 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:24:43.593317 | orchestrator | ok: [testbed-manager]
2026-04-02 00:24:43.593337 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:43.593356 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:43.593376 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:43.593389 | orchestrator |
2026-04-02 00:24:43.593402 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-02 00:24:43.593415 | orchestrator |
2026-04-02 00:24:43.593428 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-02 00:24:43.593441 | orchestrator | Thursday 02 April 2026 00:24:36 +0000 (0:00:04.707) 0:00:05.192 ********
2026-04-02 00:24:43.593454 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-02 00:24:43.593468 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-02 00:24:43.593480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-02 00:24:43.593494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-02 00:24:43.593506 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-02 00:24:43.593519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-02 00:24:43.593531 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-02 00:24:43.593545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-02 00:24:43.593558 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-02 00:24:43.593571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-02 00:24:43.593583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-02 00:24:43.593596 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-02 00:24:43.593609 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-02 00:24:43.593622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-02 00:24:43.593633 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-02 00:24:43.593644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-02 00:24:43.593682 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-02 00:24:43.593694 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-02 00:24:43.593704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-02 00:24:43.593715 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-02 00:24:43.593726 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-02 00:24:43.593736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-02 00:24:43.593747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-02 00:24:43.593757 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:24:43.593768 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-02 00:24:43.593779 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-02 00:24:43.593789 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:24:43.593800 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-02 00:24:43.593811 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-02 00:24:43.593822 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-02 00:24:43.593832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-02 00:24:43.593843 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-02 00:24:43.593854 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-02 00:24:43.593868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-02 00:24:43.593886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-02 00:24:43.593903 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-02 00:24:43.593920 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-02 00:24:43.593939 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-02 00:24:43.593959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-02 00:24:43.594006 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-02 00:24:43.594080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-02 00:24:43.594092 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-02 00:24:43.594103 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-02 00:24:43.594114 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-02 00:24:43.594125 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:24:43.594136 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-02 00:24:43.594168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-02 00:24:43.594180 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:24:43.594191 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-02 00:24:43.594201 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-02 00:24:43.594212 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:24:43.594223 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-02 00:24:43.594234 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:24:43.594245 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-02 00:24:43.594256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-02 00:24:43.594267 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:24:43.594278 | orchestrator |
2026-04-02 00:24:43.594289 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-02 00:24:43.594300 | orchestrator |
2026-04-02 00:24:43.594311 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-02 00:24:43.594322 | orchestrator | Thursday 02 April 2026 00:24:36 +0000 (0:00:00.368) 0:00:05.561 ********
2026-04-02 00:24:43.594333 | orchestrator | ok: [testbed-manager]
2026-04-02 00:24:43.594344 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:24:43.594366 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:43.594377 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:24:43.594388 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:43.594399 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:43.594410 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:24:43.594421 | orchestrator |
2026-04-02 00:24:43.594432 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-02 00:24:43.594443 | orchestrator | Thursday 02 April 2026 00:24:38 +0000 (0:00:01.207) 0:00:06.768 ********
2026-04-02 00:24:43.594454 | orchestrator | ok: [testbed-manager]
2026-04-02 00:24:43.594465 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:24:43.594476 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:24:43.594486 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:24:43.594497 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:24:43.594508 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:24:43.594519 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:24:43.594530 | orchestrator |
2026-04-02 00:24:43.594541 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-02 00:24:43.594552 | orchestrator | Thursday 02 April 2026 00:24:39 +0000 (0:00:01.232) 0:00:08.000 ********
2026-04-02 00:24:43.594564 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:24:43.594579 | orchestrator | 2026-04-02 00:24:43.594590 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-02 00:24:43.594601 | orchestrator | Thursday 02 April 2026 00:24:39 +0000 (0:00:00.266) 0:00:08.266 ******** 2026-04-02 00:24:43.594612 | orchestrator | changed: [testbed-manager] 2026-04-02 00:24:43.594623 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:43.594634 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:24:43.594645 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:24:43.594656 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:24:43.594667 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:24:43.594678 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:24:43.594689 | orchestrator | 2026-04-02 00:24:43.594700 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-02 00:24:43.594711 | orchestrator | Thursday 02 April 2026 00:24:41 +0000 (0:00:01.505) 0:00:09.771 ******** 2026-04-02 00:24:43.594722 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:24:43.594734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:24:43.594748 | orchestrator | 2026-04-02 00:24:43.594759 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-02 00:24:43.594787 | orchestrator | Thursday 02 April 2026 00:24:41 +0000 (0:00:00.266) 0:00:10.038 ******** 2026-04-02 00:24:43.594799 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:24:43.594810 | 
orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:43.594820 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:24:43.594831 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:24:43.594842 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:24:43.594853 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:24:43.594864 | orchestrator | 2026-04-02 00:24:43.594875 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-02 00:24:43.594886 | orchestrator | Thursday 02 April 2026 00:24:42 +0000 (0:00:01.098) 0:00:11.137 ******** 2026-04-02 00:24:43.594897 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:24:43.594908 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:24:43.594919 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:24:43.594930 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:24:43.594941 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:43.594951 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:24:43.594969 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:24:43.595007 | orchestrator | 2026-04-02 00:24:43.595019 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-02 00:24:43.595035 | orchestrator | Thursday 02 April 2026 00:24:43 +0000 (0:00:00.584) 0:00:11.722 ******** 2026-04-02 00:24:43.595047 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:24:43.595058 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:24:43.595068 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:24:43.595079 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:24:43.595090 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:24:43.595101 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:24:43.595111 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:43.595122 | orchestrator | 2026-04-02 00:24:43.595133 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-02 00:24:43.595146 | orchestrator | Thursday 02 April 2026 00:24:43 +0000 (0:00:00.447) 0:00:12.169 ******** 2026-04-02 00:24:43.595157 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:24:43.595168 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:24:43.595185 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:24:55.323100 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:24:55.323225 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:24:55.323248 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:24:55.323265 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:24:55.323284 | orchestrator | 2026-04-02 00:24:55.323296 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-02 00:24:55.323308 | orchestrator | Thursday 02 April 2026 00:24:43 +0000 (0:00:00.221) 0:00:12.390 ******** 2026-04-02 00:24:55.323320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:24:55.323343 | orchestrator | 2026-04-02 00:24:55.323354 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-02 00:24:55.323365 | orchestrator | Thursday 02 April 2026 00:24:43 +0000 (0:00:00.275) 0:00:12.666 ******** 2026-04-02 00:24:55.323375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:24:55.323385 | orchestrator | 2026-04-02 00:24:55.323395 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-02 
00:24:55.323404 | orchestrator | Thursday 02 April 2026 00:24:44 +0000 (0:00:00.307) 0:00:12.974 ******** 2026-04-02 00:24:55.323414 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.323425 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.323435 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.323445 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.323455 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.323464 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.323474 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.323484 | orchestrator | 2026-04-02 00:24:55.323494 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-02 00:24:55.323504 | orchestrator | Thursday 02 April 2026 00:24:45 +0000 (0:00:01.319) 0:00:14.293 ******** 2026-04-02 00:24:55.323515 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:24:55.323524 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:24:55.323534 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:24:55.323544 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:24:55.323553 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:24:55.323563 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:24:55.323573 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:24:55.323583 | orchestrator | 2026-04-02 00:24:55.323592 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-02 00:24:55.323623 | orchestrator | Thursday 02 April 2026 00:24:45 +0000 (0:00:00.199) 0:00:14.493 ******** 2026-04-02 00:24:55.323633 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.323643 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.323652 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.323666 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.323683 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.323698 | orchestrator 
| ok: [testbed-node-4] 2026-04-02 00:24:55.323722 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.323739 | orchestrator | 2026-04-02 00:24:55.323755 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-02 00:24:55.323771 | orchestrator | Thursday 02 April 2026 00:24:46 +0000 (0:00:00.538) 0:00:15.032 ******** 2026-04-02 00:24:55.323787 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:24:55.323801 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:24:55.323821 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:24:55.323841 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:24:55.323856 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:24:55.323871 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:24:55.323885 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:24:55.323900 | orchestrator | 2026-04-02 00:24:55.323917 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-02 00:24:55.323935 | orchestrator | Thursday 02 April 2026 00:24:46 +0000 (0:00:00.249) 0:00:15.282 ******** 2026-04-02 00:24:55.323951 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.324043 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:24:55.324056 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:24:55.324066 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:55.324075 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:24:55.324088 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:24:55.324101 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:24:55.324111 | orchestrator | 2026-04-02 00:24:55.324120 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-02 00:24:55.324130 | orchestrator | Thursday 02 April 2026 00:24:47 +0000 (0:00:00.544) 0:00:15.827 ******** 2026-04-02 00:24:55.324140 | orchestrator | ok: 
[testbed-manager] 2026-04-02 00:24:55.324149 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:24:55.324159 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:55.324168 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:24:55.324178 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:24:55.324187 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:24:55.324197 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:24:55.324206 | orchestrator | 2026-04-02 00:24:55.324226 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-02 00:24:55.324236 | orchestrator | Thursday 02 April 2026 00:24:48 +0000 (0:00:01.128) 0:00:16.955 ******** 2026-04-02 00:24:55.324245 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.324255 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.324265 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.324275 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.324284 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.324294 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.324303 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.324312 | orchestrator | 2026-04-02 00:24:55.324322 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-02 00:24:55.324332 | orchestrator | Thursday 02 April 2026 00:24:49 +0000 (0:00:01.017) 0:00:17.972 ******** 2026-04-02 00:24:55.324364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:24:55.324374 | orchestrator | 2026-04-02 00:24:55.324384 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-02 00:24:55.324394 | orchestrator | Thursday 02 April 2026 
00:24:49 +0000 (0:00:00.328) 0:00:18.301 ******** 2026-04-02 00:24:55.324415 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:24:55.324424 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:24:55.324434 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:24:55.324444 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:24:55.324453 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:55.324463 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:24:55.324472 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:24:55.324482 | orchestrator | 2026-04-02 00:24:55.324491 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-02 00:24:55.324501 | orchestrator | Thursday 02 April 2026 00:24:50 +0000 (0:00:01.263) 0:00:19.564 ******** 2026-04-02 00:24:55.324511 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.324520 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.324530 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.324540 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.324549 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.324559 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.324568 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.324578 | orchestrator | 2026-04-02 00:24:55.324587 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-02 00:24:55.324597 | orchestrator | Thursday 02 April 2026 00:24:51 +0000 (0:00:00.214) 0:00:19.779 ******** 2026-04-02 00:24:55.324607 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.324616 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.324626 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.324635 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.324650 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.324673 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.324692 | 
orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.324707 | orchestrator | 2026-04-02 00:24:55.324722 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-02 00:24:55.324738 | orchestrator | Thursday 02 April 2026 00:24:51 +0000 (0:00:00.227) 0:00:20.007 ******** 2026-04-02 00:24:55.324755 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.324772 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.324788 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.324801 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.324811 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.324820 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.324830 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.324839 | orchestrator | 2026-04-02 00:24:55.324849 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-02 00:24:55.324859 | orchestrator | Thursday 02 April 2026 00:24:51 +0000 (0:00:00.238) 0:00:20.246 ******** 2026-04-02 00:24:55.324869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:24:55.324881 | orchestrator | 2026-04-02 00:24:55.324890 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-02 00:24:55.324900 | orchestrator | Thursday 02 April 2026 00:24:51 +0000 (0:00:00.272) 0:00:20.518 ******** 2026-04-02 00:24:55.324910 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.324919 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.324929 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.324938 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.324948 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.324957 | orchestrator | ok: 
[testbed-node-4] 2026-04-02 00:24:55.324998 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.325008 | orchestrator | 2026-04-02 00:24:55.325018 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-02 00:24:55.325028 | orchestrator | Thursday 02 April 2026 00:24:52 +0000 (0:00:00.538) 0:00:21.056 ******** 2026-04-02 00:24:55.325037 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:24:55.325057 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:24:55.325066 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:24:55.325076 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:24:55.325086 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:24:55.325095 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:24:55.325105 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:24:55.325115 | orchestrator | 2026-04-02 00:24:55.325124 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-02 00:24:55.325134 | orchestrator | Thursday 02 April 2026 00:24:52 +0000 (0:00:00.203) 0:00:21.259 ******** 2026-04-02 00:24:55.325143 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.325153 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:24:55.325163 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.325172 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:55.325182 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:24:55.325192 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.325201 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.325211 | orchestrator | 2026-04-02 00:24:55.325221 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-02 00:24:55.325231 | orchestrator | Thursday 02 April 2026 00:24:53 +0000 (0:00:01.081) 0:00:22.341 ******** 2026-04-02 00:24:55.325241 | orchestrator | ok: [testbed-manager] 2026-04-02 
00:24:55.325250 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:24:55.325260 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:24:55.325270 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:24:55.325279 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.325289 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.325298 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:24:55.325308 | orchestrator | 2026-04-02 00:24:55.325317 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-02 00:24:55.325327 | orchestrator | Thursday 02 April 2026 00:24:54 +0000 (0:00:00.668) 0:00:23.009 ******** 2026-04-02 00:24:55.325337 | orchestrator | ok: [testbed-manager] 2026-04-02 00:24:55.325347 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:24:55.325356 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:24:55.325366 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:24:55.325384 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:25:35.417403 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.417485 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:25:35.417499 | orchestrator | 2026-04-02 00:25:35.417512 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-02 00:25:35.417524 | orchestrator | Thursday 02 April 2026 00:24:55 +0000 (0:00:01.244) 0:00:24.254 ******** 2026-04-02 00:25:35.417535 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.417546 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.417557 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.417568 | orchestrator | changed: [testbed-manager] 2026-04-02 00:25:35.417578 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:25:35.417589 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:25:35.417600 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:25:35.417611 | orchestrator | 2026-04-02 00:25:35.417622 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-02 00:25:35.417633 | orchestrator | Thursday 02 April 2026 00:25:11 +0000 (0:00:16.030) 0:00:40.284 ******** 2026-04-02 00:25:35.417644 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.417655 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.417666 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.417677 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.417687 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.417698 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.417709 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.417719 | orchestrator | 2026-04-02 00:25:35.417730 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-02 00:25:35.417741 | orchestrator | Thursday 02 April 2026 00:25:11 +0000 (0:00:00.221) 0:00:40.506 ******** 2026-04-02 00:25:35.417752 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.417781 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.417792 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.417803 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.417813 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.417824 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.417834 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.417845 | orchestrator | 2026-04-02 00:25:35.417856 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-02 00:25:35.417867 | orchestrator | Thursday 02 April 2026 00:25:12 +0000 (0:00:00.197) 0:00:40.704 ******** 2026-04-02 00:25:35.417878 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.417888 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.417899 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.417909 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.417961 | orchestrator | ok: 
[testbed-node-3] 2026-04-02 00:25:35.417976 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.417989 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.418001 | orchestrator | 2026-04-02 00:25:35.418071 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-02 00:25:35.418094 | orchestrator | Thursday 02 April 2026 00:25:12 +0000 (0:00:00.186) 0:00:40.890 ******** 2026-04-02 00:25:35.418113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:25:35.418133 | orchestrator | 2026-04-02 00:25:35.418170 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-02 00:25:35.418189 | orchestrator | Thursday 02 April 2026 00:25:12 +0000 (0:00:00.279) 0:00:41.170 ******** 2026-04-02 00:25:35.418207 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.418226 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.418245 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.418259 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.418272 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.418284 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.418297 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.418309 | orchestrator | 2026-04-02 00:25:35.418322 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-02 00:25:35.418335 | orchestrator | Thursday 02 April 2026 00:25:14 +0000 (0:00:01.764) 0:00:42.935 ******** 2026-04-02 00:25:35.418346 | orchestrator | changed: [testbed-manager] 2026-04-02 00:25:35.418357 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:25:35.418368 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:25:35.418379 | orchestrator | 
changed: [testbed-node-1] 2026-04-02 00:25:35.418389 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:25:35.418400 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:25:35.418410 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:25:35.418421 | orchestrator | 2026-04-02 00:25:35.418432 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-02 00:25:35.418442 | orchestrator | Thursday 02 April 2026 00:25:15 +0000 (0:00:01.081) 0:00:44.016 ******** 2026-04-02 00:25:35.418453 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.418464 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.418475 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.418485 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.418496 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.418507 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.418517 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.418528 | orchestrator | 2026-04-02 00:25:35.418538 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-02 00:25:35.418549 | orchestrator | Thursday 02 April 2026 00:25:16 +0000 (0:00:00.798) 0:00:44.815 ******** 2026-04-02 00:25:35.418566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:25:35.418590 | orchestrator | 2026-04-02 00:25:35.418601 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-02 00:25:35.418613 | orchestrator | Thursday 02 April 2026 00:25:16 +0000 (0:00:00.284) 0:00:45.099 ******** 2026-04-02 00:25:35.418624 | orchestrator | changed: [testbed-manager] 2026-04-02 00:25:35.418635 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:25:35.418646 | 
orchestrator | changed: [testbed-node-2] 2026-04-02 00:25:35.418657 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:25:35.418667 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:25:35.418678 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:25:35.418689 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:25:35.418700 | orchestrator | 2026-04-02 00:25:35.418725 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-02 00:25:35.418737 | orchestrator | Thursday 02 April 2026 00:25:17 +0000 (0:00:01.060) 0:00:46.159 ******** 2026-04-02 00:25:35.418748 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:25:35.418759 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:25:35.418770 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:25:35.418780 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:25:35.418791 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:25:35.418802 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:25:35.418813 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:25:35.418824 | orchestrator | 2026-04-02 00:25:35.418835 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-02 00:25:35.418846 | orchestrator | Thursday 02 April 2026 00:25:17 +0000 (0:00:00.215) 0:00:46.375 ******** 2026-04-02 00:25:35.418857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:25:35.418868 | orchestrator | 2026-04-02 00:25:35.418879 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-02 00:25:35.418890 | orchestrator | Thursday 02 April 2026 00:25:17 +0000 (0:00:00.274) 0:00:46.650 ******** 2026-04-02 00:25:35.418900 | orchestrator | ok: 
[testbed-manager] 2026-04-02 00:25:35.418911 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.418946 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.418958 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.418968 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.418979 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.418990 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.419000 | orchestrator | 2026-04-02 00:25:35.419011 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-02 00:25:35.419022 | orchestrator | Thursday 02 April 2026 00:25:19 +0000 (0:00:01.935) 0:00:48.585 ******** 2026-04-02 00:25:35.419033 | orchestrator | changed: [testbed-manager] 2026-04-02 00:25:35.419044 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:25:35.419055 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:25:35.419065 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:25:35.419076 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:25:35.419087 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:25:35.419097 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:25:35.419108 | orchestrator | 2026-04-02 00:25:35.419119 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-02 00:25:35.419130 | orchestrator | Thursday 02 April 2026 00:25:21 +0000 (0:00:01.152) 0:00:49.738 ******** 2026-04-02 00:25:35.419141 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:25:35.419151 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:25:35.419162 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:25:35.419173 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:25:35.419184 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:25:35.419195 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:25:35.419213 | orchestrator | changed: [testbed-manager] 2026-04-02 00:25:35.419224 | 
orchestrator | 2026-04-02 00:25:35.419235 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-02 00:25:35.419246 | orchestrator | Thursday 02 April 2026 00:25:32 +0000 (0:00:11.637) 0:01:01.376 ******** 2026-04-02 00:25:35.419257 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.419268 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.419278 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.419289 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.419300 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.419311 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.419322 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.419332 | orchestrator | 2026-04-02 00:25:35.419343 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-02 00:25:35.419354 | orchestrator | Thursday 02 April 2026 00:25:33 +0000 (0:00:00.999) 0:01:02.375 ******** 2026-04-02 00:25:35.419365 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.419376 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.419387 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.419397 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.419408 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.419419 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.419429 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.419440 | orchestrator | 2026-04-02 00:25:35.419451 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-02 00:25:35.419462 | orchestrator | Thursday 02 April 2026 00:25:34 +0000 (0:00:00.896) 0:01:03.272 ******** 2026-04-02 00:25:35.419473 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.419484 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.419494 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.419505 | orchestrator | ok: 
[testbed-node-2] 2026-04-02 00:25:35.419516 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.419526 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.419537 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.419548 | orchestrator | 2026-04-02 00:25:35.419559 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-02 00:25:35.419570 | orchestrator | Thursday 02 April 2026 00:25:34 +0000 (0:00:00.248) 0:01:03.520 ******** 2026-04-02 00:25:35.419581 | orchestrator | ok: [testbed-manager] 2026-04-02 00:25:35.419591 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:25:35.419602 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:25:35.419618 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:25:35.419628 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:25:35.419639 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:25:35.419650 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:25:35.419660 | orchestrator | 2026-04-02 00:25:35.419671 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-02 00:25:35.419682 | orchestrator | Thursday 02 April 2026 00:25:35 +0000 (0:00:00.251) 0:01:03.771 ******** 2026-04-02 00:25:35.419694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:25:35.419705 | orchestrator | 2026-04-02 00:25:35.419721 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-02 00:28:18.442216 | orchestrator | Thursday 02 April 2026 00:25:35 +0000 (0:00:00.328) 0:01:04.100 ******** 2026-04-02 00:28:18.442307 | orchestrator | ok: [testbed-manager] 2026-04-02 00:28:18.442315 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:28:18.442321 | orchestrator | 
ok: [testbed-node-2] 2026-04-02 00:28:18.442325 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:28:18.442330 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:28:18.442334 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:28:18.442339 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:28:18.442343 | orchestrator | 2026-04-02 00:28:18.442348 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-02 00:28:18.442379 | orchestrator | Thursday 02 April 2026 00:25:37 +0000 (0:00:02.010) 0:01:06.111 ******** 2026-04-02 00:28:18.442387 | orchestrator | changed: [testbed-manager] 2026-04-02 00:28:18.442394 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:28:18.442400 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:28:18.442407 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:28:18.442413 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:28:18.442420 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:28:18.442428 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:28:18.442434 | orchestrator | 2026-04-02 00:28:18.442441 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-02 00:28:18.442450 | orchestrator | Thursday 02 April 2026 00:25:38 +0000 (0:00:00.594) 0:01:06.705 ******** 2026-04-02 00:28:18.442454 | orchestrator | ok: [testbed-manager] 2026-04-02 00:28:18.442458 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:28:18.442463 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:28:18.442467 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:28:18.442471 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:28:18.442475 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:28:18.442479 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:28:18.442483 | orchestrator | 2026-04-02 00:28:18.442488 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-02 
00:28:18.442492 | orchestrator | Thursday 02 April 2026 00:25:38 +0000 (0:00:00.262) 0:01:06.968 ******** 2026-04-02 00:28:18.442496 | orchestrator | ok: [testbed-manager] 2026-04-02 00:28:18.442501 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:28:18.442505 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:28:18.442509 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:28:18.442513 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:28:18.442517 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:28:18.442537 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:28:18.442541 | orchestrator | 2026-04-02 00:28:18.442545 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-02 00:28:18.442550 | orchestrator | Thursday 02 April 2026 00:25:39 +0000 (0:00:01.383) 0:01:08.352 ******** 2026-04-02 00:28:18.442554 | orchestrator | changed: [testbed-manager] 2026-04-02 00:28:18.442558 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:28:18.442562 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:28:18.442566 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:28:18.442570 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:28:18.442574 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:28:18.442578 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:28:18.442582 | orchestrator | 2026-04-02 00:28:18.442587 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-02 00:28:18.442591 | orchestrator | Thursday 02 April 2026 00:25:41 +0000 (0:00:01.845) 0:01:10.197 ******** 2026-04-02 00:28:18.442595 | orchestrator | ok: [testbed-manager] 2026-04-02 00:28:18.442599 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:28:18.442603 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:28:18.442607 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:28:18.442612 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:28:18.442616 | orchestrator | ok: 
[testbed-node-1] 2026-04-02 00:28:18.442620 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:28:18.442624 | orchestrator | 2026-04-02 00:28:18.442638 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-02 00:28:18.442642 | orchestrator | Thursday 02 April 2026 00:25:44 +0000 (0:00:02.723) 0:01:12.920 ******** 2026-04-02 00:28:18.442647 | orchestrator | ok: [testbed-manager] 2026-04-02 00:28:18.442651 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:28:18.442655 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:28:18.442665 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:28:18.442670 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:28:18.442674 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:28:18.442678 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:28:18.442684 | orchestrator | 2026-04-02 00:28:18.442691 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-02 00:28:18.442704 | orchestrator | Thursday 02 April 2026 00:26:48 +0000 (0:01:04.569) 0:02:17.490 ******** 2026-04-02 00:28:18.442711 | orchestrator | changed: [testbed-manager] 2026-04-02 00:28:18.442718 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:28:18.442725 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:28:18.442731 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:28:18.442738 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:28:18.442744 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:28:18.442751 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:28:18.442758 | orchestrator | 2026-04-02 00:28:18.442765 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-02 00:28:18.442790 | orchestrator | Thursday 02 April 2026 00:28:04 +0000 (0:01:15.409) 0:03:32.900 ******** 2026-04-02 00:28:18.442796 | orchestrator | ok: [testbed-manager] 2026-04-02 00:28:18.442803 | orchestrator | 
ok: [testbed-node-0] 2026-04-02 00:28:18.442811 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:28:18.442816 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:28:18.442820 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:28:18.442824 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:28:18.442829 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:28:18.442833 | orchestrator | 2026-04-02 00:28:18.442837 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-02 00:28:18.442842 | orchestrator | Thursday 02 April 2026 00:28:06 +0000 (0:00:01.895) 0:03:34.795 ******** 2026-04-02 00:28:18.442846 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:28:18.442850 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:28:18.442854 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:28:18.442858 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:28:18.442863 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:28:18.442867 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:28:18.442871 | orchestrator | changed: [testbed-manager] 2026-04-02 00:28:18.442875 | orchestrator | 2026-04-02 00:28:18.442879 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-02 00:28:18.442883 | orchestrator | Thursday 02 April 2026 00:28:17 +0000 (0:00:11.223) 0:03:46.019 ******** 2026-04-02 00:28:18.442913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-02 00:28:18.442927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-02 00:28:18.442933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-02 00:28:18.442939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-02 00:28:18.442949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-02 00:28:18.442953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-02 00:28:18.442960 | orchestrator | 2026-04-02 00:28:18.442965 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-02 00:28:18.442969 | orchestrator | Thursday 02 April 2026 00:28:17 +0000 (0:00:00.394) 0:03:46.413 ******** 2026-04-02 00:28:18.442973 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-02 00:28:18.442977 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:28:18.442982 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-02 00:28:18.442986 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:28:18.442990 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-02 00:28:18.442994 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:28:18.443005 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-02 00:28:18.443009 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:28:18.443014 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-02 00:28:18.443018 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-02 00:28:18.443022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-02 00:28:18.443026 | orchestrator | 2026-04-02 00:28:18.443030 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-02 00:28:18.443038 | orchestrator | Thursday 02 April 2026 00:28:18 +0000 (0:00:00.639) 0:03:47.053 ******** 2026-04-02 00:28:18.443042 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-02 00:28:18.443047 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-02 00:28:18.443059 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-02 00:28:18.443063 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-02 00:28:18.443067 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-02 00:28:18.443074 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-02 00:28:26.260327 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-02 00:28:26.260424 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-02 00:28:26.260437 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-02 00:28:26.260444 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-02 00:28:26.260453 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:28:26.260462 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-02 00:28:26.260469 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-02 00:28:26.260476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-02 00:28:26.260506 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-02 00:28:26.260514 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-02 00:28:26.260521 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-02 
00:28:26.260528 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-02 00:28:26.260534 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-02 00:28:26.260541 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-02 00:28:26.260548 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-02 00:28:26.260554 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-02 00:28:26.260561 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-02 00:28:26.260568 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:28:26.260575 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-02 00:28:26.260581 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-02 00:28:26.260588 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-02 00:28:26.260594 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-02 00:28:26.260601 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-02 00:28:26.260606 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-02 00:28:26.260612 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-02 00:28:26.260618 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-02 00:28:26.260625 | orchestrator | skipping: [testbed-node-4] => 
(item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-02 00:28:26.260631 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-02 00:28:26.260637 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-02 00:28:26.260644 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-02 00:28:26.260650 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-02 00:28:26.260656 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-02 00:28:26.260662 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:28:26.260668 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-02 00:28:26.260673 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-02 00:28:26.260680 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-02 00:28:26.260698 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-02 00:28:26.260705 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:28:26.260711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-02 00:28:26.260718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-02 00:28:26.260724 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-02 00:28:26.260737 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-02 00:28:26.260743 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-02 00:28:26.260882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-02 00:28:26.260892 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-02 00:28:26.260899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-02 00:28:26.260906 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-02 00:28:26.260912 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-02 00:28:26.260919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-02 00:28:26.260927 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-02 00:28:26.260934 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-02 00:28:26.260941 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-02 00:28:26.260948 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-02 00:28:26.260954 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-02 00:28:26.260961 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-02 00:28:26.260968 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-02 00:28:26.260975 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-02 00:28:26.260982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 
2026-04-02 00:28:26.260988 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-02 00:28:26.260994 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-02 00:28:26.261000 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-02 00:28:26.261006 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-02 00:28:26.261013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-02 00:28:26.261019 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-02 00:28:26.261025 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-02 00:28:26.261032 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-02 00:28:26.261038 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-02 00:28:26.261045 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-02 00:28:26.261052 | orchestrator | 2026-04-02 00:28:26.261078 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-02 00:28:26.261086 | orchestrator | Thursday 02 April 2026 00:28:24 +0000 (0:00:05.746) 0:03:52.799 ******** 2026-04-02 00:28:26.261092 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-02 00:28:26.261118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-02 00:28:26.261136 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-02 00:28:26.261144 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-02 00:28:26.261191 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-02 00:28:26.261201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-02 00:28:26.261208 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-02 00:28:26.261214 | orchestrator | 2026-04-02 00:28:26.261222 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-02 00:28:26.261229 | orchestrator | Thursday 02 April 2026 00:28:25 +0000 (0:00:01.569) 0:03:54.369 ******** 2026-04-02 00:28:26.261236 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:26.261251 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:26.261258 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:28:26.261265 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:26.261272 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:28:26.261278 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:26.261285 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:28:26.261292 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:28:26.261299 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-02 00:28:26.261306 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-02 00:28:26.261323 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-02 
00:28:39.569825 | orchestrator | 2026-04-02 00:28:39.569923 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-02 00:28:39.569937 | orchestrator | Thursday 02 April 2026 00:28:26 +0000 (0:00:00.607) 0:03:54.977 ******** 2026-04-02 00:28:39.569945 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:39.569954 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:28:39.569964 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:39.569972 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:28:39.569980 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:39.569988 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:28:39.569996 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-02 00:28:39.570004 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:28:39.570070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-02 00:28:39.570082 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-02 00:28:39.570090 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-02 00:28:39.570097 | orchestrator | 2026-04-02 00:28:39.570105 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-02 00:28:39.570113 | orchestrator | Thursday 02 April 2026 00:28:27 +0000 (0:00:01.542) 0:03:56.520 ******** 2026-04-02 00:28:39.570120 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-02 
00:28:39.570127 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-02 00:28:39.570134 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:28:39.570141 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-02 00:28:39.570148 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:28:39.570179 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:28:39.570187 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-02 00:28:39.570194 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:28:39.570201 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-02 00:28:39.570209 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-02 00:28:39.570216 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-02 00:28:39.570223 | orchestrator | 2026-04-02 00:28:39.570230 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-02 00:28:39.570237 | orchestrator | Thursday 02 April 2026 00:28:28 +0000 (0:00:00.696) 0:03:57.216 ******** 2026-04-02 00:28:39.570244 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:28:39.570251 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:28:39.570258 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:28:39.570265 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:28:39.570273 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:28:39.570280 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:28:39.570287 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:28:39.570294 | orchestrator | 2026-04-02 00:28:39.570301 | orchestrator | TASK 
[osism.commons.services : Populate service facts] *************************
2026-04-02 00:28:39.570309 | orchestrator | Thursday 02 April 2026 00:28:28 +0000 (0:00:00.259) 0:03:57.475 ********
2026-04-02 00:28:39.570316 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:28:39.570324 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:28:39.570330 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:28:39.570336 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:28:39.570343 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:28:39.570350 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:28:39.570357 | orchestrator | ok: [testbed-manager]
2026-04-02 00:28:39.570364 | orchestrator |
2026-04-02 00:28:39.570371 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-02 00:28:39.570378 | orchestrator | Thursday 02 April 2026 00:28:34 +0000 (0:00:05.221) 0:04:02.696 ********
2026-04-02 00:28:39.570386 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-02 00:28:39.570393 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:28:39.570401 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-02 00:28:39.570408 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:28:39.570415 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-02 00:28:39.570422 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-02 00:28:39.570429 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:28:39.570436 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-02 00:28:39.570443 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:28:39.570451 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-02 00:28:39.570458 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:28:39.570465 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:28:39.570472 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-02 00:28:39.570479 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:28:39.570486 | orchestrator |
2026-04-02 00:28:39.570494 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-02 00:28:39.570501 | orchestrator | Thursday 02 April 2026 00:28:34 +0000 (0:00:00.263) 0:04:02.960 ********
2026-04-02 00:28:39.570508 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-02 00:28:39.570516 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-02 00:28:39.570523 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-02 00:28:39.570545 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-02 00:28:39.570553 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-02 00:28:39.570559 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-02 00:28:39.570581 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-02 00:28:39.570588 | orchestrator |
2026-04-02 00:28:39.570595 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-02 00:28:39.570601 | orchestrator | Thursday 02 April 2026 00:28:35 +0000 (0:00:01.165) 0:04:04.126 ********
2026-04-02 00:28:39.570610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:28:39.570618 | orchestrator |
2026-04-02 00:28:39.570625 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-02 00:28:39.570631 | orchestrator | Thursday 02 April 2026 00:28:35 +0000 (0:00:00.410) 0:04:04.536 ********
2026-04-02 00:28:39.570638 | orchestrator | ok: [testbed-manager]
2026-04-02 00:28:39.570643 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:28:39.570649 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:28:39.570655 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:28:39.570661 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:28:39.570667 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:28:39.570673 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:28:39.570680 | orchestrator |
2026-04-02 00:28:39.570686 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-02 00:28:39.570693 | orchestrator | Thursday 02 April 2026 00:28:37 +0000 (0:00:01.334) 0:04:05.871 ********
2026-04-02 00:28:39.570700 | orchestrator | ok: [testbed-manager]
2026-04-02 00:28:39.570706 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:28:39.570712 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:28:39.570718 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:28:39.570725 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:28:39.570731 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:28:39.570769 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:28:39.570778 | orchestrator |
2026-04-02 00:28:39.570785 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-02 00:28:39.570793 | orchestrator | Thursday 02 April 2026 00:28:37 +0000 (0:00:00.572) 0:04:06.443 ********
2026-04-02 00:28:39.570800 | orchestrator | changed: [testbed-manager]
2026-04-02 00:28:39.570807 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:28:39.570814 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:28:39.570821 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:28:39.570828 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:28:39.570835 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:28:39.570843 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:28:39.570850 | orchestrator |
2026-04-02 00:28:39.570856 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-02 00:28:39.570863 | orchestrator | Thursday 02 April 2026 00:28:38 +0000 (0:00:00.629) 0:04:07.073 ********
2026-04-02 00:28:39.570869 | orchestrator | ok: [testbed-manager]
2026-04-02 00:28:39.570875 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:28:39.570882 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:28:39.570888 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:28:39.570895 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:28:39.570902 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:28:39.570909 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:28:39.570917 | orchestrator |
2026-04-02 00:28:39.570924 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-02 00:28:39.570931 | orchestrator | Thursday 02 April 2026 00:28:39 +0000 (0:00:00.646) 0:04:07.719 ********
2026-04-02 00:28:39.570942 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775088284.5341046, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:39.570964 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775088313.8820446, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:39.570973 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775088304.742815, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:39.570991 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775088311.4610913, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125551 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775088314.2653475, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125652 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775088318.930489, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125666 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775088309.625581, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125676 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125708 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125731 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125768 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125801 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125812 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125821 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 00:28:45.125831 | orchestrator |
2026-04-02 00:28:45.125842 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-02 00:28:45.125858 | orchestrator | Thursday 02 April 2026 00:28:40 +0000 (0:00:01.002) 0:04:08.722 ********
2026-04-02 00:28:45.125874 | orchestrator | changed: [testbed-manager]
2026-04-02 00:28:45.125886 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:28:45.125894 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:28:45.125910 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:28:45.125919 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:28:45.125928 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:28:45.125937 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:28:45.125946 | orchestrator |
2026-04-02 00:28:45.125955 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-02 00:28:45.125964 | orchestrator | Thursday 02 April 2026 00:28:41 +0000 (0:00:01.207) 0:04:09.930 ********
2026-04-02 00:28:45.125972 | orchestrator | changed: [testbed-manager]
2026-04-02 00:28:45.125981 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:28:45.125990 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:28:45.125999 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:28:45.126007 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:28:45.126061 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:28:45.126070 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:28:45.126079 | orchestrator |
2026-04-02 00:28:45.126090 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-02 00:28:45.126101 | orchestrator | Thursday 02 April 2026 00:28:42 +0000 (0:00:01.225) 0:04:11.225 ********
2026-04-02 00:28:45.126138 | orchestrator | changed: [testbed-manager]
2026-04-02 00:28:45.126149 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:28:45.126159 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:28:45.126169 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:28:45.126179 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:28:45.126189 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:28:45.126199 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:28:45.126209 | orchestrator |
2026-04-02 00:28:45.126219 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-02 00:28:45.126233 | orchestrator | Thursday 02 April 2026 00:28:43 +0000 (0:00:00.231) 0:04:12.450 ********
2026-04-02 00:28:45.126244 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:28:45.126254 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:28:45.126265 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:28:45.126274 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:28:45.126284 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:28:45.126294 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:28:45.126304 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:28:45.126314 | orchestrator |
2026-04-02 00:28:45.126324 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-02 00:28:45.126334 | orchestrator | Thursday 02 April 2026 00:28:43 +0000 (0:00:00.231) 0:04:12.681 ********
2026-04-02 00:28:45.126343 | orchestrator | ok: [testbed-manager]
2026-04-02 00:28:45.126352 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:28:45.126361 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:28:45.126369 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:28:45.126378 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:28:45.126387 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:28:45.126395 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:28:45.126404 | orchestrator |
2026-04-02 00:28:45.126413 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-02 00:28:45.126421 | orchestrator | Thursday 02 April 2026 00:28:44 +0000 (0:00:00.365) 0:04:13.443 ********
2026-04-02 00:28:45.126431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:28:45.126442 | orchestrator |
2026-04-02 00:28:45.126451 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-02 00:28:45.126467 | orchestrator | Thursday 02 April 2026 00:28:45 +0000 (0:00:00.365) 0:04:13.809 ********
2026-04-02 00:30:04.909213 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.909331 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:30:04.909348 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:30:04.909360 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:30:04.909396 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:30:04.909407 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:30:04.909418 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:30:04.909430 | orchestrator |
2026-04-02 00:30:04.909443 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-02 00:30:04.909457 | orchestrator | Thursday 02 April 2026 00:28:54 +0000 (0:00:08.888) 0:04:22.697 ********
2026-04-02 00:30:04.909468 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.909479 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:04.909490 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:04.909501 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:04.909512 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:04.909523 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:04.909534 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:04.909544 | orchestrator |
2026-04-02 00:30:04.909556 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-02 00:30:04.909567 | orchestrator | Thursday 02 April 2026 00:28:55 +0000 (0:00:01.424) 0:04:24.122 ********
2026-04-02 00:30:04.909578 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.909588 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:04.909599 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:04.909610 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:04.909686 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:04.909699 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:04.909710 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:04.909720 | orchestrator |
2026-04-02 00:30:04.909731 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-02 00:30:04.909742 | orchestrator | Thursday 02 April 2026 00:28:56 +0000 (0:00:00.998) 0:04:25.120 ********
2026-04-02 00:30:04.909753 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.909764 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:04.909774 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:04.909785 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:04.909796 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:04.909806 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:04.909817 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:04.909827 | orchestrator |
2026-04-02 00:30:04.909838 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-02 00:30:04.909850 | orchestrator | Thursday 02 April 2026 00:28:56 +0000 (0:00:00.280) 0:04:25.401 ********
2026-04-02 00:30:04.909868 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.909885 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:04.909916 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:04.909934 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:04.909952 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:04.909969 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:04.909985 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:04.910004 | orchestrator |
2026-04-02 00:30:04.910096 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-02 00:30:04.910110 | orchestrator | Thursday 02 April 2026 00:28:56 +0000 (0:00:00.286) 0:04:25.687 ********
2026-04-02 00:30:04.910120 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.910132 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:04.910142 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:04.910153 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:04.910164 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:04.910174 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:04.910185 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:04.910196 | orchestrator |
2026-04-02 00:30:04.910207 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-02 00:30:04.910218 | orchestrator | Thursday 02 April 2026 00:28:57 +0000 (0:00:00.285) 0:04:25.973 ********
2026-04-02 00:30:04.910229 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:04.910240 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:04.910251 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:04.910276 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:04.910287 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.910297 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:04.910308 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:04.910319 | orchestrator |
2026-04-02 00:30:04.910332 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-02 00:30:04.910350 | orchestrator | Thursday 02 April 2026 00:29:01 +0000 (0:00:04.702) 0:04:30.676 ********
2026-04-02 00:30:04.910379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:30:04.910402 | orchestrator |
2026-04-02 00:30:04.910420 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-02 00:30:04.910438 | orchestrator | Thursday 02 April 2026 00:29:02 +0000 (0:00:00.386) 0:04:31.062 ********
2026-04-02 00:30:04.910456 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-02 00:30:04.910474 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-02 00:30:04.910494 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-02 00:30:04.910512 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:30:04.910531 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-02 00:30:04.910550 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-02 00:30:04.910563 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-02 00:30:04.910573 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:30:04.910584 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-02 00:30:04.910595 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-02 00:30:04.910606 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:30:04.910686 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-02 00:30:04.910703 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:30:04.910714 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-02 00:30:04.910725 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-02 00:30:04.910736 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:30:04.910769 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-02 00:30:04.910780 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:30:04.910792 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-02 00:30:04.910803 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-02 00:30:04.910814 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:30:04.910825 | orchestrator |
2026-04-02 00:30:04.910836 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-02 00:30:04.910847 | orchestrator | Thursday 02 April 2026 00:29:02 +0000 (0:00:00.320) 0:04:31.382 ********
2026-04-02 00:30:04.910858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:30:04.910869 | orchestrator |
2026-04-02 00:30:04.910880 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-02 00:30:04.910891 | orchestrator | Thursday 02 April 2026 00:29:03 +0000 (0:00:00.467) 0:04:31.850 ********
2026-04-02 00:30:04.910902 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-02 00:30:04.910913 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-02 00:30:04.910924 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:30:04.910935 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:30:04.910965 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-02 00:30:04.910977 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-02 00:30:04.910998 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:30:04.911009 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-02 00:30:04.911020 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:30:04.911031 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:30:04.911042 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-02 00:30:04.911053 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:30:04.911064 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-02 00:30:04.911074 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:30:04.911085 | orchestrator |
2026-04-02 00:30:04.911096 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-02 00:30:04.911107 | orchestrator | Thursday 02 April 2026 00:29:03 +0000 (0:00:00.282) 0:04:32.132 ********
2026-04-02 00:30:04.911118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:30:04.911129 | orchestrator |
2026-04-02 00:30:04.911140 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-02 00:30:04.911151 | orchestrator | Thursday 02 April 2026 00:29:03 +0000 (0:00:00.365) 0:04:32.497 ********
2026-04-02 00:30:04.911162 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:30:04.911173 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:30:04.911183 | orchestrator | changed: [testbed-manager]
2026-04-02 00:30:04.911194 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:30:04.911205 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:30:04.911215 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:30:04.911224 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:30:04.911234 | orchestrator |
2026-04-02 00:30:04.911244 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-02 00:30:04.911253 | orchestrator | Thursday 02 April 2026 00:29:37 +0000 (0:00:33.348) 0:05:05.845 ********
2026-04-02 00:30:04.911263 | orchestrator | changed: [testbed-manager]
2026-04-02 00:30:04.911273 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:30:04.911282 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:30:04.911292 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:30:04.911301 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:30:04.911311 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:30:04.911325 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:30:04.911334 | orchestrator |
2026-04-02 00:30:04.911344 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-02 00:30:04.911354 | orchestrator | Thursday 02 April 2026 00:29:46 +0000 (0:00:09.248) 0:05:15.093 ********
2026-04-02 00:30:04.911363 | orchestrator | changed: [testbed-manager]
2026-04-02 00:30:04.911373 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:30:04.911383 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:30:04.911393 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:30:04.911402 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:30:04.911412 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:30:04.911421 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:30:04.911431 | orchestrator |
2026-04-02 00:30:04.911440 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-02 00:30:04.911450 | orchestrator | Thursday 02 April 2026 00:29:56 +0000 (0:00:09.649) 0:05:24.743 ********
2026-04-02 00:30:04.911460 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:04.911469 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:04.911479 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:04.911489 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:04.911498 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:04.911508 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:04.911517 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:04.911527 | orchestrator |
2026-04-02 00:30:04.911536 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-02 00:30:04.911552 | orchestrator | Thursday 02 April 2026 00:29:57 +0000 (0:00:01.838) 0:05:26.581 ********
2026-04-02 00:30:04.911562 | orchestrator | changed: [testbed-manager]
2026-04-02 00:30:04.911572 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:30:04.911582 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:30:04.911591 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:30:04.911601 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:30:04.911610 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:30:04.911661 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:30:04.911671 | orchestrator |
2026-04-02 00:30:04.911687 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-02 00:30:16.660084 | orchestrator | Thursday 02 April 2026 00:30:04 +0000 (0:00:07.010) 0:05:33.592 ********
2026-04-02 00:30:16.660203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:30:16.660222 | orchestrator |
2026-04-02 00:30:16.660234 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-02 00:30:16.660247 | orchestrator | Thursday 02 April 2026 00:30:05 +0000 (0:00:00.452) 0:05:34.045 ********
2026-04-02 00:30:16.660259 | orchestrator | changed: [testbed-manager]
2026-04-02 00:30:16.660271 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:30:16.660282 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:30:16.660293 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:30:16.660303 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:30:16.660314 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:30:16.660325 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:30:16.660336 | orchestrator |
2026-04-02 00:30:16.660347 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-02 00:30:16.660357 | orchestrator | Thursday 02 April 2026 00:30:06 +0000 (0:00:00.737) 0:05:34.782 ********
2026-04-02 00:30:16.660368 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:16.660380 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:16.660391 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:16.660401 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:16.660412 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:16.660423 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:16.660433 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:16.660444 | orchestrator |
2026-04-02 00:30:16.660455 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-02 00:30:16.660465 | orchestrator | Thursday 02 April 2026 00:30:08 +0000 (0:00:02.110) 0:05:36.892 ********
2026-04-02 00:30:16.660476 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:30:16.660487 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:30:16.660498 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:30:16.660509 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:30:16.660519 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:30:16.660530 | orchestrator | changed: [testbed-manager]
2026-04-02 00:30:16.660543 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:30:16.660562 | orchestrator |
2026-04-02 00:30:16.660580 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-02 00:30:16.660599 | orchestrator | Thursday 02 April 2026 00:30:09 +0000 (0:00:00.859) 0:05:37.752 ********
2026-04-02 00:30:16.660656 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:30:16.660676 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:30:16.660696 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:30:16.660714 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:30:16.660733 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:30:16.660752 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:30:16.660772 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:30:16.660792 | orchestrator |
2026-04-02 00:30:16.660805 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-02 00:30:16.660816 | orchestrator | Thursday 02 April 2026 00:30:09 +0000 (0:00:00.278) 0:05:38.030 ********
2026-04-02 00:30:16.660852 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:30:16.660864 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:30:16.660878 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:30:16.660897 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:30:16.660916 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:30:16.660935 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:30:16.660952 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:30:16.660968 | orchestrator |
2026-04-02 00:30:16.660981 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-02 00:30:16.661000 | orchestrator | Thursday 02 April 2026 00:30:09 +0000 (0:00:00.375) 0:05:38.405 ********
2026-04-02 00:30:16.661018 | orchestrator | ok: [testbed-manager]
2026-04-02 00:30:16.661037 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:30:16.661056 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:30:16.661074 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:30:16.661093 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:30:16.661119 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:30:16.661131 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:30:16.661141 | orchestrator |
2026-04-02 00:30:16.661152 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-02 00:30:16.661163 | orchestrator | Thursday 02 April 2026 00:30:10 +0000 (0:00:00.434) 0:05:38.840 ********
2026-04-02 00:30:16.661174 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:30:16.661185 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:30:16.661195 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:30:16.661206 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:30:16.661217 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:30:16.661228 | orchestrator | skipping: [testbed-node-4]
2026-04-02
00:30:16.661238 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:30:16.661249 | orchestrator | 2026-04-02 00:30:16.661260 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-02 00:30:16.661272 | orchestrator | Thursday 02 April 2026 00:30:10 +0000 (0:00:00.253) 0:05:39.093 ******** 2026-04-02 00:30:16.661283 | orchestrator | ok: [testbed-manager] 2026-04-02 00:30:16.661293 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:30:16.661304 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:30:16.661315 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:30:16.661325 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:30:16.661336 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:30:16.661347 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:30:16.661357 | orchestrator | 2026-04-02 00:30:16.661368 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-02 00:30:16.661379 | orchestrator | Thursday 02 April 2026 00:30:10 +0000 (0:00:00.311) 0:05:39.404 ******** 2026-04-02 00:30:16.661390 | orchestrator | ok: [testbed-manager] =>  2026-04-02 00:30:16.661401 | orchestrator |  docker_version: 5:27.5.1 2026-04-02 00:30:16.661411 | orchestrator | ok: [testbed-node-0] =>  2026-04-02 00:30:16.661422 | orchestrator |  docker_version: 5:27.5.1 2026-04-02 00:30:16.661433 | orchestrator | ok: [testbed-node-1] =>  2026-04-02 00:30:16.661444 | orchestrator |  docker_version: 5:27.5.1 2026-04-02 00:30:16.661455 | orchestrator | ok: [testbed-node-2] =>  2026-04-02 00:30:16.661465 | orchestrator |  docker_version: 5:27.5.1 2026-04-02 00:30:16.661496 | orchestrator | ok: [testbed-node-3] =>  2026-04-02 00:30:16.661508 | orchestrator |  docker_version: 5:27.5.1 2026-04-02 00:30:16.661518 | orchestrator | ok: [testbed-node-4] =>  2026-04-02 00:30:16.661529 | orchestrator |  docker_version: 5:27.5.1 2026-04-02 00:30:16.661540 | orchestrator | ok: [testbed-node-5] =>  
2026-04-02 00:30:16.661551 | orchestrator |  docker_version: 5:27.5.1 2026-04-02 00:30:16.661561 | orchestrator | 2026-04-02 00:30:16.661572 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-02 00:30:16.661583 | orchestrator | Thursday 02 April 2026 00:30:11 +0000 (0:00:00.285) 0:05:39.690 ******** 2026-04-02 00:30:16.661594 | orchestrator | ok: [testbed-manager] =>  2026-04-02 00:30:16.661656 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-02 00:30:16.661669 | orchestrator | ok: [testbed-node-0] =>  2026-04-02 00:30:16.661680 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-02 00:30:16.661690 | orchestrator | ok: [testbed-node-1] =>  2026-04-02 00:30:16.661701 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-02 00:30:16.661712 | orchestrator | ok: [testbed-node-2] =>  2026-04-02 00:30:16.661722 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-02 00:30:16.661733 | orchestrator | ok: [testbed-node-3] =>  2026-04-02 00:30:16.661744 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-02 00:30:16.661755 | orchestrator | ok: [testbed-node-4] =>  2026-04-02 00:30:16.661766 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-02 00:30:16.661776 | orchestrator | ok: [testbed-node-5] =>  2026-04-02 00:30:16.661787 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-02 00:30:16.661804 | orchestrator | 2026-04-02 00:30:16.661824 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-02 00:30:16.661844 | orchestrator | Thursday 02 April 2026 00:30:11 +0000 (0:00:00.294) 0:05:39.985 ******** 2026-04-02 00:30:16.661865 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:30:16.661887 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:30:16.661908 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:30:16.661921 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:30:16.661932 | orchestrator | skipping: [testbed-node-3] 
2026-04-02 00:30:16.661942 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:30:16.661953 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:30:16.661964 | orchestrator | 2026-04-02 00:30:16.661974 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-02 00:30:16.661986 | orchestrator | Thursday 02 April 2026 00:30:11 +0000 (0:00:00.254) 0:05:40.240 ******** 2026-04-02 00:30:16.661996 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:30:16.662007 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:30:16.662081 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:30:16.662093 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:30:16.662104 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:30:16.662115 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:30:16.662126 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:30:16.662137 | orchestrator | 2026-04-02 00:30:16.662148 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-02 00:30:16.662159 | orchestrator | Thursday 02 April 2026 00:30:11 +0000 (0:00:00.274) 0:05:40.514 ******** 2026-04-02 00:30:16.662172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:30:16.662185 | orchestrator | 2026-04-02 00:30:16.662196 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-02 00:30:16.662208 | orchestrator | Thursday 02 April 2026 00:30:12 +0000 (0:00:00.427) 0:05:40.942 ******** 2026-04-02 00:30:16.662218 | orchestrator | ok: [testbed-manager] 2026-04-02 00:30:16.662229 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:30:16.662240 | orchestrator | ok: [testbed-node-3] 2026-04-02 
00:30:16.662251 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:30:16.662262 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:30:16.662272 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:30:16.662283 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:30:16.662294 | orchestrator | 2026-04-02 00:30:16.662305 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-02 00:30:16.662316 | orchestrator | Thursday 02 April 2026 00:30:13 +0000 (0:00:00.831) 0:05:41.773 ******** 2026-04-02 00:30:16.662333 | orchestrator | ok: [testbed-manager] 2026-04-02 00:30:16.662345 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:30:16.662355 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:30:16.662366 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:30:16.662377 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:30:16.662397 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:30:16.662423 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:30:16.662433 | orchestrator | 2026-04-02 00:30:16.662462 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-02 00:30:16.662496 | orchestrator | Thursday 02 April 2026 00:30:16 +0000 (0:00:03.214) 0:05:44.988 ******** 2026-04-02 00:30:16.662514 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-02 00:30:16.662544 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-02 00:30:16.662562 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-02 00:30:16.662582 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-02 00:30:16.662672 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-02 00:30:16.662699 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-02 00:30:16.662716 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:30:16.662730 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-04-02 00:30:16.662743 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-02 00:30:16.662759 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-02 00:30:16.662773 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:30:16.662788 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-02 00:30:16.662803 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-02 00:30:16.662820 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:30:16.662835 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-02 00:30:16.662851 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-02 00:30:16.662890 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-02 00:31:21.399949 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-02 00:31:21.400066 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:31:21.400083 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-02 00:31:21.400096 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-02 00:31:21.400107 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:31:21.400117 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-02 00:31:21.400128 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:31:21.400139 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-02 00:31:21.400150 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-02 00:31:21.400160 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-02 00:31:21.400170 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:31:21.400181 | orchestrator | 2026-04-02 00:31:21.400192 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-02 00:31:21.400205 | orchestrator | Thursday 
02 April 2026 00:30:16 +0000 (0:00:00.574) 0:05:45.562 ******** 2026-04-02 00:31:21.400215 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.400226 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.400236 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.400247 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.400258 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.400268 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.400278 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.400288 | orchestrator | 2026-04-02 00:31:21.400299 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-02 00:31:21.400309 | orchestrator | Thursday 02 April 2026 00:30:24 +0000 (0:00:07.599) 0:05:53.162 ******** 2026-04-02 00:31:21.400319 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.400330 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.400341 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.400351 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.400362 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.400373 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.400405 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.400417 | orchestrator | 2026-04-02 00:31:21.400428 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-02 00:31:21.400439 | orchestrator | Thursday 02 April 2026 00:30:25 +0000 (0:00:01.128) 0:05:54.290 ******** 2026-04-02 00:31:21.400449 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.400460 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.400470 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.400480 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.400491 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.400502 | orchestrator | 
changed: [testbed-node-4] 2026-04-02 00:31:21.400533 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.400544 | orchestrator | 2026-04-02 00:31:21.400555 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-02 00:31:21.400565 | orchestrator | Thursday 02 April 2026 00:30:33 +0000 (0:00:08.272) 0:06:02.563 ******** 2026-04-02 00:31:21.400576 | orchestrator | changed: [testbed-manager] 2026-04-02 00:31:21.400587 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.400597 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.400608 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.400619 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.400630 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.400640 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.400650 | orchestrator | 2026-04-02 00:31:21.400660 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-02 00:31:21.400671 | orchestrator | Thursday 02 April 2026 00:30:37 +0000 (0:00:03.556) 0:06:06.119 ******** 2026-04-02 00:31:21.400682 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.400693 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.400704 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.400714 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.400725 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.400735 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.400745 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.400755 | orchestrator | 2026-04-02 00:31:21.400781 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-02 00:31:21.400793 | orchestrator | Thursday 02 April 2026 00:30:38 +0000 (0:00:01.338) 0:06:07.457 ******** 2026-04-02 00:31:21.400803 | orchestrator | ok: [testbed-manager] 
2026-04-02 00:31:21.400814 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.400826 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.400837 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.400849 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.400859 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.400869 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.400879 | orchestrator | 2026-04-02 00:31:21.400889 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-02 00:31:21.400898 | orchestrator | Thursday 02 April 2026 00:30:40 +0000 (0:00:01.334) 0:06:08.792 ******** 2026-04-02 00:31:21.400908 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:31:21.400918 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:31:21.400929 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:31:21.400939 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:31:21.400949 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:31:21.400958 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:31:21.400968 | orchestrator | changed: [testbed-manager] 2026-04-02 00:31:21.400979 | orchestrator | 2026-04-02 00:31:21.400990 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-02 00:31:21.401001 | orchestrator | Thursday 02 April 2026 00:30:40 +0000 (0:00:00.579) 0:06:09.372 ******** 2026-04-02 00:31:21.401012 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.401022 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.401031 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.401050 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.401060 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.401069 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.401078 | orchestrator | changed: [testbed-node-1] 2026-04-02 
00:31:21.401087 | orchestrator | 2026-04-02 00:31:21.401096 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-02 00:31:21.401122 | orchestrator | Thursday 02 April 2026 00:30:51 +0000 (0:00:11.068) 0:06:20.441 ******** 2026-04-02 00:31:21.401131 | orchestrator | changed: [testbed-manager] 2026-04-02 00:31:21.401141 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.401150 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.401159 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.401168 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.401177 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.401186 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.401195 | orchestrator | 2026-04-02 00:31:21.401203 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-02 00:31:21.401213 | orchestrator | Thursday 02 April 2026 00:30:52 +0000 (0:00:01.123) 0:06:21.564 ******** 2026-04-02 00:31:21.401221 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.401230 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.401239 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.401249 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.401258 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.401267 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.401275 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.401284 | orchestrator | 2026-04-02 00:31:21.401293 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-02 00:31:21.401302 | orchestrator | Thursday 02 April 2026 00:31:03 +0000 (0:00:10.249) 0:06:31.814 ******** 2026-04-02 00:31:21.401311 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.401320 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.401329 | 
orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.401338 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.401347 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.401356 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.401365 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.401373 | orchestrator | 2026-04-02 00:31:21.401382 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-02 00:31:21.401391 | orchestrator | Thursday 02 April 2026 00:31:14 +0000 (0:00:11.267) 0:06:43.081 ******** 2026-04-02 00:31:21.401400 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-02 00:31:21.401409 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-02 00:31:21.401418 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-02 00:31:21.401428 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-02 00:31:21.401437 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-02 00:31:21.401445 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-02 00:31:21.401454 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-02 00:31:21.401463 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-02 00:31:21.401472 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-02 00:31:21.401480 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-02 00:31:21.401489 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-02 00:31:21.401498 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-02 00:31:21.401556 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-02 00:31:21.401567 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-02 00:31:21.401577 | orchestrator | 2026-04-02 00:31:21.401585 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-04-02 00:31:21.401594 | orchestrator | Thursday 02 April 2026 00:31:15 +0000 (0:00:01.256) 0:06:44.338 ******** 2026-04-02 00:31:21.401612 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:31:21.401621 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:31:21.401630 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:31:21.401639 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:31:21.401648 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:31:21.401657 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:31:21.401666 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:31:21.401675 | orchestrator | 2026-04-02 00:31:21.401684 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-02 00:31:21.401693 | orchestrator | Thursday 02 April 2026 00:31:16 +0000 (0:00:00.662) 0:06:45.000 ******** 2026-04-02 00:31:21.401702 | orchestrator | ok: [testbed-manager] 2026-04-02 00:31:21.401712 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:31:21.401721 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:31:21.401730 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:31:21.401739 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:31:21.401748 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:31:21.401756 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:31:21.401765 | orchestrator | 2026-04-02 00:31:21.401774 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-02 00:31:21.401785 | orchestrator | Thursday 02 April 2026 00:31:20 +0000 (0:00:04.322) 0:06:49.323 ******** 2026-04-02 00:31:21.401795 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:31:21.401804 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:31:21.401812 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:31:21.401821 | orchestrator | skipping: 
[testbed-node-2] 2026-04-02 00:31:21.401830 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:31:21.401839 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:31:21.401848 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:31:21.401857 | orchestrator | 2026-04-02 00:31:21.401907 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-02 00:31:21.401918 | orchestrator | Thursday 02 April 2026 00:31:21 +0000 (0:00:00.506) 0:06:49.830 ******** 2026-04-02 00:31:21.401928 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-02 00:31:21.401936 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-02 00:31:21.401945 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:31:21.401954 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-02 00:31:21.401963 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-02 00:31:21.401973 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:31:21.401982 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-02 00:31:21.401991 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-02 00:31:21.402000 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:31:21.402069 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-02 00:31:40.771380 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-02 00:31:40.771551 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:31:40.771581 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-02 00:31:40.771602 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-02 00:31:40.771626 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-02 00:31:40.771646 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  
2026-04-02 00:31:40.771666 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:31:40.771684 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:31:40.771704 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-02 00:31:40.771723 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-02 00:31:40.771742 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:31:40.771759 | orchestrator | 2026-04-02 00:31:40.771781 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-02 00:31:40.771838 | orchestrator | Thursday 02 April 2026 00:31:21 +0000 (0:00:00.577) 0:06:50.408 ******** 2026-04-02 00:31:40.771859 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:31:40.771880 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:31:40.771899 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:31:40.771920 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:31:40.771941 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:31:40.771961 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:31:40.771983 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:31:40.772005 | orchestrator | 2026-04-02 00:31:40.772027 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-02 00:31:40.772045 | orchestrator | Thursday 02 April 2026 00:31:22 +0000 (0:00:00.473) 0:06:50.881 ******** 2026-04-02 00:31:40.772058 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:31:40.772072 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:31:40.772084 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:31:40.772095 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:31:40.772106 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:31:40.772116 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:31:40.772127 | orchestrator | skipping: 
[testbed-node-5]
2026-04-02 00:31:40.772138 | orchestrator |
2026-04-02 00:31:40.772150 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-02 00:31:40.772161 | orchestrator | Thursday 02 April 2026 00:31:22 +0000 (0:00:00.633) 0:06:51.515 ********
2026-04-02 00:31:40.772172 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:31:40.772182 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:31:40.772193 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:31:40.772204 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:31:40.772214 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:31:40.772225 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:31:40.772236 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:31:40.772247 | orchestrator |
2026-04-02 00:31:40.772258 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-02 00:31:40.772269 | orchestrator | Thursday 02 April 2026 00:31:23 +0000 (0:00:00.496) 0:06:52.011 ********
2026-04-02 00:31:40.772280 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.772291 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:31:40.772301 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:31:40.772312 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:31:40.772323 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:31:40.772333 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:31:40.772344 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:31:40.772355 | orchestrator |
2026-04-02 00:31:40.772365 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-02 00:31:40.772376 | orchestrator | Thursday 02 April 2026 00:31:25 +0000 (0:00:01.903) 0:06:53.915 ********
2026-04-02 00:31:40.772389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:31:40.772403 | orchestrator |
2026-04-02 00:31:40.772428 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-02 00:31:40.772440 | orchestrator | Thursday 02 April 2026 00:31:26 +0000 (0:00:00.852) 0:06:54.768 ********
2026-04-02 00:31:40.772451 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.772462 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:31:40.772552 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:31:40.772573 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:31:40.772591 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:31:40.772602 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:31:40.772613 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:31:40.772624 | orchestrator |
2026-04-02 00:31:40.772635 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-02 00:31:40.772659 | orchestrator | Thursday 02 April 2026 00:31:27 +0000 (0:00:01.079) 0:06:55.847 ********
2026-04-02 00:31:40.772669 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.772680 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:31:40.772691 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:31:40.772701 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:31:40.772712 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:31:40.772722 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:31:40.772733 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:31:40.772744 | orchestrator |
2026-04-02 00:31:40.772754 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-02 00:31:40.772765 | orchestrator | Thursday 02 April 2026 00:31:28 +0000 (0:00:00.865) 0:06:56.713 ********
2026-04-02 00:31:40.772776 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.772786 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:31:40.772797 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:31:40.772808 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:31:40.772818 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:31:40.772829 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:31:40.772839 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:31:40.772850 | orchestrator |
2026-04-02 00:31:40.772861 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-02 00:31:40.772894 | orchestrator | Thursday 02 April 2026 00:31:29 +0000 (0:00:01.356) 0:06:58.069 ********
2026-04-02 00:31:40.772906 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:31:40.772916 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:31:40.772927 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:31:40.772938 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:31:40.772948 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:31:40.772959 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:31:40.772969 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:31:40.772980 | orchestrator |
2026-04-02 00:31:40.772990 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-02 00:31:40.773001 | orchestrator | Thursday 02 April 2026 00:31:30 +0000 (0:00:01.404) 0:06:59.474 ********
2026-04-02 00:31:40.773012 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.773023 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:31:40.773033 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:31:40.773044 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:31:40.773054 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:31:40.773065 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:31:40.773075 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:31:40.773086 | orchestrator |
2026-04-02 00:31:40.773097 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-02 00:31:40.773108 | orchestrator | Thursday 02 April 2026 00:31:32 +0000 (0:00:01.500) 0:07:00.974 ********
2026-04-02 00:31:40.773118 | orchestrator | changed: [testbed-manager]
2026-04-02 00:31:40.773129 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:31:40.773139 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:31:40.773150 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:31:40.773160 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:31:40.773171 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:31:40.773181 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:31:40.773192 | orchestrator |
2026-04-02 00:31:40.773203 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-02 00:31:40.773214 | orchestrator | Thursday 02 April 2026 00:31:33 +0000 (0:00:01.376) 0:07:02.351 ********
2026-04-02 00:31:40.773225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:31:40.773236 | orchestrator |
2026-04-02 00:31:40.773247 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-02 00:31:40.773258 | orchestrator | Thursday 02 April 2026 00:31:34 +0000 (0:00:00.845) 0:07:03.196 ********
2026-04-02 00:31:40.773287 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.773307 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:31:40.773325 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:31:40.773337 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:31:40.773347 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:31:40.773358 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:31:40.773368 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:31:40.773379 | orchestrator |
2026-04-02 00:31:40.773390 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-02 00:31:40.773401 | orchestrator | Thursday 02 April 2026 00:31:35 +0000 (0:00:01.351) 0:07:04.548 ********
2026-04-02 00:31:40.773411 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.773422 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:31:40.773435 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:31:40.773453 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:31:40.773496 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:31:40.773509 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:31:40.773520 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:31:40.773531 | orchestrator |
2026-04-02 00:31:40.773542 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-02 00:31:40.773552 | orchestrator | Thursday 02 April 2026 00:31:37 +0000 (0:00:01.429) 0:07:05.977 ********
2026-04-02 00:31:40.773563 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.773574 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:31:40.773585 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:31:40.773595 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:31:40.773606 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:31:40.773617 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:31:40.773628 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:31:40.773638 | orchestrator |
2026-04-02 00:31:40.773650 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-02 00:31:40.773661 | orchestrator | Thursday 02 April 2026 00:31:38 +0000 (0:00:01.167) 0:07:07.145 ********
2026-04-02 00:31:40.773672 | orchestrator | ok: [testbed-manager]
2026-04-02 00:31:40.773683 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:31:40.773693 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:31:40.773704 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:31:40.773715 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:31:40.773725 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:31:40.773736 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:31:40.773747 | orchestrator |
2026-04-02 00:31:40.773758 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-02 00:31:40.773768 | orchestrator | Thursday 02 April 2026 00:31:39 +0000 (0:00:01.149) 0:07:08.295 ********
2026-04-02 00:31:40.773779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:31:40.773790 | orchestrator |
2026-04-02 00:31:40.773801 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-02 00:31:40.773812 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.875) 0:07:09.170 ********
2026-04-02 00:31:40.773823 | orchestrator |
2026-04-02 00:31:40.773834 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-02 00:31:40.773845 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.201) 0:07:09.371 ********
2026-04-02 00:31:40.773855 | orchestrator |
2026-04-02 00:31:40.773866 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-02 00:31:40.773877 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.040) 0:07:09.412 ********
2026-04-02 00:31:40.773888 | orchestrator |
2026-04-02 00:31:40.773899 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-02 00:31:40.773918 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.040) 0:07:09.452 ********
2026-04-02 00:32:06.706175 | orchestrator |
2026-04-02 00:32:06.706305 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-02 00:32:06.706348 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.048) 0:07:09.500 ********
2026-04-02 00:32:06.706361 | orchestrator |
2026-04-02 00:32:06.706373 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-02 00:32:06.706384 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.038) 0:07:09.539 ********
2026-04-02 00:32:06.706395 | orchestrator |
2026-04-02 00:32:06.706406 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-02 00:32:06.706417 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.039) 0:07:09.578 ********
2026-04-02 00:32:06.706481 | orchestrator |
2026-04-02 00:32:06.706494 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-02 00:32:06.706505 | orchestrator | Thursday 02 April 2026 00:31:40 +0000 (0:00:00.047) 0:07:09.625 ********
2026-04-02 00:32:06.706516 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:06.706529 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:06.706542 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:06.706555 | orchestrator |
2026-04-02 00:32:06.706568 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-02 00:32:06.706580 | orchestrator | Thursday 02 April 2026 00:31:42 +0000 (0:00:01.277) 0:07:10.903 ********
2026-04-02 00:32:06.706593 | orchestrator | changed: [testbed-manager]
2026-04-02 00:32:06.706607 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:06.706620 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:06.706633 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:06.706647 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:06.706659 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:06.706673 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:06.706686 | orchestrator |
2026-04-02 00:32:06.706698 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-02 00:32:06.706711 | orchestrator | Thursday 02 April 2026 00:31:43 +0000 (0:00:01.352) 0:07:12.255 ********
2026-04-02 00:32:06.706724 | orchestrator | changed: [testbed-manager]
2026-04-02 00:32:06.706737 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:06.706749 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:06.706762 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:06.706782 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:06.706801 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:06.706821 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:06.706840 | orchestrator |
2026-04-02 00:32:06.706858 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-02 00:32:06.706877 | orchestrator | Thursday 02 April 2026 00:31:44 +0000 (0:00:01.209) 0:07:13.465 ********
2026-04-02 00:32:06.706898 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:06.706920 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:06.706940 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:06.706958 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:06.706970 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:06.706981 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:06.706991 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:06.707002 | orchestrator |
2026-04-02 00:32:06.707013 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-02 00:32:06.707024 | orchestrator | Thursday 02 April 2026 00:31:47 +0000 (0:00:02.516) 0:07:15.981 ********
2026-04-02 00:32:06.707035 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:06.707046 | orchestrator |
2026-04-02 00:32:06.707057 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-02 00:32:06.707068 | orchestrator | Thursday 02 April 2026 00:31:47 +0000 (0:00:00.106) 0:07:16.088 ********
2026-04-02 00:32:06.707079 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:06.707089 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:06.707100 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:06.707111 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:06.707133 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:06.707144 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:06.707155 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:06.707167 | orchestrator |
2026-04-02 00:32:06.707203 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-02 00:32:06.707225 | orchestrator | Thursday 02 April 2026 00:31:48 +0000 (0:00:01.191) 0:07:17.280 ********
2026-04-02 00:32:06.707244 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:06.707298 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:06.707319 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:32:06.707340 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:32:06.707359 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:32:06.707379 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:32:06.707400 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:32:06.707421 | orchestrator |
2026-04-02 00:32:06.707467 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-02 00:32:06.707485 | orchestrator | Thursday 02 April 2026 00:31:49 +0000 (0:00:00.510) 0:07:17.790 ********
2026-04-02 00:32:06.707504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:32:06.707519 | orchestrator |
2026-04-02 00:32:06.707530 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-02 00:32:06.707541 | orchestrator | Thursday 02 April 2026 00:31:49 +0000 (0:00:00.895) 0:07:18.685 ********
2026-04-02 00:32:06.707552 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:06.707562 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:06.707573 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:06.707584 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:06.707595 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:06.707606 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:06.707616 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:06.707627 | orchestrator |
2026-04-02 00:32:06.707638 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-02 00:32:06.707649 | orchestrator | Thursday 02 April 2026 00:31:51 +0000 (0:00:01.061) 0:07:19.747 ********
2026-04-02 00:32:06.707660 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-02 00:32:06.707693 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-02 00:32:06.707712 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-02 00:32:06.707729 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-02 00:32:06.707745 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-02 00:32:06.707761 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-02 00:32:06.707779 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-02 00:32:06.707797 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-02 00:32:06.707814 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-02 00:32:06.707832 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-02 00:32:06.707848 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-02 00:32:06.707867 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-02 00:32:06.707886 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-02 00:32:06.707904 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-02 00:32:06.707918 | orchestrator |
2026-04-02 00:32:06.707964 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-02 00:32:06.707983 | orchestrator | Thursday 02 April 2026 00:31:53 +0000 (0:00:02.621) 0:07:22.368 ********
2026-04-02 00:32:06.708001 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:06.708019 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:06.708036 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:32:06.708074 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:32:06.708093 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:32:06.708113 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:32:06.708132 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:32:06.708151 | orchestrator |
2026-04-02 00:32:06.708169 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-02 00:32:06.708189 | orchestrator | Thursday 02 April 2026 00:31:54 +0000 (0:00:00.428) 0:07:22.796 ********
2026-04-02 00:32:06.708210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:32:06.708232 | orchestrator |
2026-04-02 00:32:06.708252 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-02 00:32:06.708270 | orchestrator | Thursday 02 April 2026 00:31:54 +0000 (0:00:00.810) 0:07:23.606 ********
2026-04-02 00:32:06.708288 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:06.708300 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:06.708311 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:06.708322 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:06.708332 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:06.708343 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:06.708354 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:06.708365 | orchestrator |
2026-04-02 00:32:06.708376 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-02 00:32:06.708387 | orchestrator | Thursday 02 April 2026 00:31:55 +0000 (0:00:00.779) 0:07:24.386 ********
2026-04-02 00:32:06.708398 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:06.708408 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:06.708419 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:06.708466 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:06.708477 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:06.708488 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:06.708499 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:06.708510 | orchestrator |
2026-04-02 00:32:06.708521 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-02 00:32:06.708532 | orchestrator | Thursday 02 April 2026 00:31:56 +0000 (0:00:00.779) 0:07:25.165 ********
2026-04-02 00:32:06.708543 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:06.708554 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:06.708565 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:32:06.708586 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:32:06.708597 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:32:06.708613 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:32:06.708632 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:32:06.708644 | orchestrator |
2026-04-02 00:32:06.708655 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-02 00:32:06.708666 | orchestrator | Thursday 02 April 2026 00:31:56 +0000 (0:00:00.446) 0:07:25.612 ********
2026-04-02 00:32:06.708677 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:06.708688 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:06.708699 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:06.708710 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:06.708720 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:06.708731 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:06.708742 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:06.708753 | orchestrator |
2026-04-02 00:32:06.708764 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-02 00:32:06.708775 | orchestrator | Thursday 02 April 2026 00:31:58 +0000 (0:00:01.551) 0:07:27.164 ********
2026-04-02 00:32:06.708786 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:06.708797 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:06.708809 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:32:06.708819 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:32:06.708830 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:32:06.708850 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:32:06.708861 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:32:06.708872 | orchestrator |
2026-04-02 00:32:06.708883 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-02 00:32:06.708894 | orchestrator | Thursday 02 April 2026 00:31:59 +0000 (0:00:00.558) 0:07:27.722 ********
2026-04-02 00:32:06.708905 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:06.708916 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:06.708926 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:06.708937 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:06.708948 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:06.708959 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:06.708985 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:40.638629 | orchestrator |
2026-04-02 00:32:40.638744 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-02 00:32:40.638759 | orchestrator | Thursday 02 April 2026 00:32:06 +0000 (0:00:07.736) 0:07:35.458 ********
2026-04-02 00:32:40.638770 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.638781 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:40.638792 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:40.638802 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:40.638812 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:40.638821 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:40.638831 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:40.638841 | orchestrator |
2026-04-02 00:32:40.638851 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-02 00:32:40.638861 | orchestrator | Thursday 02 April 2026 00:32:08 +0000 (0:00:01.374) 0:07:36.833 ********
2026-04-02 00:32:40.638871 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.638880 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:40.638890 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:40.638899 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:40.638909 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:40.638919 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:40.638929 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:40.638939 | orchestrator |
2026-04-02 00:32:40.638949 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-02 00:32:40.638958 | orchestrator | Thursday 02 April 2026 00:32:10 +0000 (0:00:02.497) 0:07:39.330 ********
2026-04-02 00:32:40.638968 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.638978 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:40.638987 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:40.638997 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:40.639006 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:40.639016 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:40.639025 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:40.639034 | orchestrator |
2026-04-02 00:32:40.639044 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-02 00:32:40.639054 | orchestrator | Thursday 02 April 2026 00:32:12 +0000 (0:00:01.815) 0:07:41.146 ********
2026-04-02 00:32:40.639064 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.639073 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.639083 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.639092 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.639168 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.639182 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.639193 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.639204 | orchestrator |
2026-04-02 00:32:40.639215 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-02 00:32:40.639227 | orchestrator | Thursday 02 April 2026 00:32:13 +0000 (0:00:00.828) 0:07:41.975 ********
2026-04-02 00:32:40.639239 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:40.639250 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:40.639262 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:32:40.639299 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:32:40.639311 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:32:40.639322 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:32:40.639332 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:32:40.639343 | orchestrator |
2026-04-02 00:32:40.639354 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-02 00:32:40.639435 | orchestrator | Thursday 02 April 2026 00:32:14 +0000 (0:00:00.747) 0:07:42.722 ********
2026-04-02 00:32:40.639452 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:40.639467 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:40.639478 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:32:40.639490 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:32:40.639500 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:32:40.639511 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:32:40.639522 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:32:40.639533 | orchestrator |
2026-04-02 00:32:40.639543 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-02 00:32:40.639553 | orchestrator | Thursday 02 April 2026 00:32:14 +0000 (0:00:00.632) 0:07:43.355 ********
2026-04-02 00:32:40.639562 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.639572 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.639581 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.639591 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.639600 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.639610 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.639619 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.639629 | orchestrator |
2026-04-02 00:32:40.639646 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-02 00:32:40.639661 | orchestrator | Thursday 02 April 2026 00:32:15 +0000 (0:00:00.492) 0:07:43.847 ********
2026-04-02 00:32:40.639677 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.639692 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.639708 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.639723 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.639738 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.639752 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.639767 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.639782 | orchestrator |
2026-04-02 00:32:40.639797 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-02 00:32:40.639811 | orchestrator | Thursday 02 April 2026 00:32:15 +0000 (0:00:00.518) 0:07:44.366 ********
2026-04-02 00:32:40.639825 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.639840 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.639855 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.639869 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.639885 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.639901 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.639917 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.639933 | orchestrator |
2026-04-02 00:32:40.639990 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-02 00:32:40.640008 | orchestrator | Thursday 02 April 2026 00:32:16 +0000 (0:00:00.515) 0:07:44.881 ********
2026-04-02 00:32:40.640025 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.640041 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.640057 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.640074 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.640092 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.640108 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.640148 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.640166 | orchestrator |
2026-04-02 00:32:40.640211 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-02 00:32:40.640230 | orchestrator | Thursday 02 April 2026 00:32:21 +0000 (0:00:05.755) 0:07:50.637 ********
2026-04-02 00:32:40.640246 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:32:40.640263 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:32:40.640294 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:32:40.640312 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:32:40.640329 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:32:40.640345 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:32:40.640387 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:32:40.640404 | orchestrator |
2026-04-02 00:32:40.640421 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-02 00:32:40.640439 | orchestrator | Thursday 02 April 2026 00:32:22 +0000 (0:00:00.709) 0:07:51.346 ********
2026-04-02 00:32:40.640459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:32:40.640480 | orchestrator |
2026-04-02 00:32:40.640498 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-02 00:32:40.640515 | orchestrator | Thursday 02 April 2026 00:32:23 +0000 (0:00:00.790) 0:07:52.136 ********
2026-04-02 00:32:40.640533 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.640551 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.640569 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.640587 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.640604 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.640621 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.640638 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.640655 | orchestrator |
2026-04-02 00:32:40.640672 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-02 00:32:40.640691 | orchestrator | Thursday 02 April 2026 00:32:25 +0000 (0:00:02.168) 0:07:54.305 ********
2026-04-02 00:32:40.640708 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.640726 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.640743 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.640760 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.640779 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.640796 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.640812 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.640830 | orchestrator |
2026-04-02 00:32:40.640847 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-02 00:32:40.640865 | orchestrator | Thursday 02 April 2026 00:32:26 +0000 (0:00:01.365) 0:07:55.670 ********
2026-04-02 00:32:40.640884 | orchestrator | ok: [testbed-manager]
2026-04-02 00:32:40.640901 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:32:40.640918 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:32:40.640935 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:32:40.640953 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:32:40.640970 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:32:40.640988 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:32:40.641005 | orchestrator |
2026-04-02 00:32:40.641023 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-02 00:32:40.641036 | orchestrator | Thursday 02 April 2026 00:32:27 +0000 (0:00:00.834) 0:07:56.504 ********
2026-04-02 00:32:40.641052 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-02 00:32:40.641069 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-02 00:32:40.641085 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-02 00:32:40.641109 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-02 00:32:40.641126 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-02 00:32:40.641142 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-02 00:32:40.641169 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-02 00:32:40.641185 | orchestrator |
2026-04-02 00:32:40.641201 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-02 00:32:40.641266 | orchestrator | Thursday 02 April 2026 00:32:29 +0000 (0:00:01.773) 0:07:58.277 ********
2026-04-02 00:32:40.641282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:32:40.641298 | orchestrator |
2026-04-02 00:32:40.641313 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-02 00:32:40.641330 | orchestrator | Thursday 02 April 2026 00:32:30 +0000 (0:00:01.030) 0:07:59.308 ********
2026-04-02 00:32:40.641347 | orchestrator | changed: [testbed-manager]
2026-04-02 00:32:40.641400 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:32:40.641416 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:32:40.641433 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:32:40.641450 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:32:40.641467 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:32:40.641482 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:32:40.641498 | orchestrator |
2026-04-02 00:32:40.641527 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-02 00:33:10.941133 | orchestrator | Thursday 02 April 2026 00:32:40 +0000 (0:00:10.013) 0:08:09.321 ********
2026-04-02 00:33:10.941274 | orchestrator | ok: [testbed-manager]
2026-04-02 00:33:10.941383 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:33:10.941407 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:33:10.941426 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:33:10.941445 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:33:10.941463 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:33:10.941479 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:33:10.941496 | orchestrator |
2026-04-02 00:33:10.941515 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-02 00:33:10.941533 | orchestrator | Thursday 02 April 2026 00:32:42 +0000 (0:00:01.717) 0:08:11.039 ********
2026-04-02 00:33:10.941549 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:33:10.941567 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:33:10.941583 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:33:10.941598 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:33:10.941613
| orchestrator | ok: [testbed-node-4] 2026-04-02 00:33:10.941631 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:33:10.941648 | orchestrator | 2026-04-02 00:33:10.941668 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-02 00:33:10.941688 | orchestrator | Thursday 02 April 2026 00:32:43 +0000 (0:00:01.479) 0:08:12.519 ******** 2026-04-02 00:33:10.941707 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.941729 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.941748 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.941767 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.941785 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.941802 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.941814 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.941825 | orchestrator | 2026-04-02 00:33:10.941836 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-02 00:33:10.941847 | orchestrator | 2026-04-02 00:33:10.941858 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-02 00:33:10.941869 | orchestrator | Thursday 02 April 2026 00:32:45 +0000 (0:00:01.231) 0:08:13.751 ******** 2026-04-02 00:33:10.941880 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:33:10.941890 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:33:10.941932 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:33:10.941944 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:33:10.941955 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:33:10.941966 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:33:10.941976 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:33:10.941987 | orchestrator | 2026-04-02 00:33:10.941997 | orchestrator | PLAY [Apply bootstrap roles part 3] 
******************************************** 2026-04-02 00:33:10.942008 | orchestrator | 2026-04-02 00:33:10.942087 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-04-02 00:33:10.942098 | orchestrator | Thursday 02 April 2026 00:32:45 +0000 (0:00:00.506) 0:08:14.257 ******** 2026-04-02 00:33:10.942109 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.942120 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.942131 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.942142 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.942162 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.942224 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.942241 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.942256 | orchestrator | 2026-04-02 00:33:10.942277 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-02 00:33:10.942322 | orchestrator | Thursday 02 April 2026 00:32:46 +0000 (0:00:01.311) 0:08:15.569 ******** 2026-04-02 00:33:10.942339 | orchestrator | ok: [testbed-manager] 2026-04-02 00:33:10.942357 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:33:10.942375 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:33:10.942394 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:33:10.942412 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:33:10.942432 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:33:10.942449 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:33:10.942467 | orchestrator | 2026-04-02 00:33:10.942485 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-02 00:33:10.942496 | orchestrator | Thursday 02 April 2026 00:32:48 +0000 (0:00:01.632) 0:08:17.201 ******** 2026-04-02 00:33:10.942507 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:33:10.942533 | orchestrator | skipping: [testbed-node-0] 
2026-04-02 00:33:10.942544 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:33:10.942555 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:33:10.942568 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:33:10.942587 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:33:10.942604 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:33:10.942622 | orchestrator | 2026-04-02 00:33:10.942640 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-02 00:33:10.942659 | orchestrator | Thursday 02 April 2026 00:32:48 +0000 (0:00:00.466) 0:08:17.667 ******** 2026-04-02 00:33:10.942678 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:33:10.942699 | orchestrator | 2026-04-02 00:33:10.942718 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-02 00:33:10.942736 | orchestrator | Thursday 02 April 2026 00:32:49 +0000 (0:00:00.802) 0:08:18.470 ******** 2026-04-02 00:33:10.942756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:33:10.942777 | orchestrator | 2026-04-02 00:33:10.942795 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-02 00:33:10.942813 | orchestrator | Thursday 02 April 2026 00:32:50 +0000 (0:00:00.940) 0:08:19.410 ******** 2026-04-02 00:33:10.942829 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.942848 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.942867 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.942885 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.942921 | 
orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.942939 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.942958 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.942970 | orchestrator | 2026-04-02 00:33:10.943006 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-02 00:33:10.943018 | orchestrator | Thursday 02 April 2026 00:32:59 +0000 (0:00:09.219) 0:08:28.630 ******** 2026-04-02 00:33:10.943029 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.943039 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.943050 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.943061 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.943071 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.943082 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.943093 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.943103 | orchestrator | 2026-04-02 00:33:10.943114 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-02 00:33:10.943125 | orchestrator | Thursday 02 April 2026 00:33:00 +0000 (0:00:00.814) 0:08:29.445 ******** 2026-04-02 00:33:10.943136 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.943147 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.943157 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.943168 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.943179 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.943189 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.943200 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.943211 | orchestrator | 2026-04-02 00:33:10.943222 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-02 00:33:10.943232 | orchestrator | Thursday 02 April 2026 00:33:02 +0000 (0:00:01.333) 
0:08:30.778 ******** 2026-04-02 00:33:10.943243 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.943254 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.943264 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.943275 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.943311 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.943323 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.943333 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.943344 | orchestrator | 2026-04-02 00:33:10.943355 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-02 00:33:10.943366 | orchestrator | Thursday 02 April 2026 00:33:03 +0000 (0:00:01.867) 0:08:32.645 ******** 2026-04-02 00:33:10.943376 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.943387 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.943398 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.943408 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.943419 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.943430 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.943440 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.943451 | orchestrator | 2026-04-02 00:33:10.943462 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-02 00:33:10.943473 | orchestrator | Thursday 02 April 2026 00:33:05 +0000 (0:00:01.212) 0:08:33.858 ******** 2026-04-02 00:33:10.943484 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.943494 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.943505 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.943516 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:10.943527 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.943547 | orchestrator | changed: [testbed-node-4] 
2026-04-02 00:33:10.943567 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.943586 | orchestrator | 2026-04-02 00:33:10.943607 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-02 00:33:10.943627 | orchestrator | 2026-04-02 00:33:10.943647 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-02 00:33:10.943669 | orchestrator | Thursday 02 April 2026 00:33:06 +0000 (0:00:01.089) 0:08:34.948 ******** 2026-04-02 00:33:10.943701 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:33:10.943714 | orchestrator | 2026-04-02 00:33:10.943725 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-02 00:33:10.943736 | orchestrator | Thursday 02 April 2026 00:33:07 +0000 (0:00:00.937) 0:08:35.886 ******** 2026-04-02 00:33:10.943746 | orchestrator | ok: [testbed-manager] 2026-04-02 00:33:10.943764 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:33:10.943775 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:33:10.943786 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:33:10.943797 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:33:10.943807 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:33:10.943818 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:33:10.943829 | orchestrator | 2026-04-02 00:33:10.943840 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-02 00:33:10.943851 | orchestrator | Thursday 02 April 2026 00:33:08 +0000 (0:00:00.843) 0:08:36.729 ******** 2026-04-02 00:33:10.943862 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:10.943873 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:10.943883 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:10.943894 | orchestrator | 
changed: [testbed-node-2] 2026-04-02 00:33:10.943905 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:10.943916 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:10.943926 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:10.943937 | orchestrator | 2026-04-02 00:33:10.943948 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-02 00:33:10.943959 | orchestrator | Thursday 02 April 2026 00:33:09 +0000 (0:00:01.268) 0:08:37.997 ******** 2026-04-02 00:33:10.943969 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:33:10.943980 | orchestrator | 2026-04-02 00:33:10.943991 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-02 00:33:10.944002 | orchestrator | Thursday 02 April 2026 00:33:10 +0000 (0:00:00.804) 0:08:38.802 ******** 2026-04-02 00:33:10.944012 | orchestrator | ok: [testbed-manager] 2026-04-02 00:33:10.944023 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:33:10.944034 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:33:10.944045 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:33:10.944055 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:33:10.944066 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:33:10.944076 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:33:10.944087 | orchestrator | 2026-04-02 00:33:10.944107 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-02 00:33:12.473969 | orchestrator | Thursday 02 April 2026 00:33:10 +0000 (0:00:00.820) 0:08:39.623 ******** 2026-04-02 00:33:12.474137 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:12.474160 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:12.474181 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:12.474201 | orchestrator | 
changed: [testbed-node-2] 2026-04-02 00:33:12.474219 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:12.474238 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:12.474256 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:12.474275 | orchestrator | 2026-04-02 00:33:12.474323 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:33:12.474342 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-02 00:33:12.474361 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-02 00:33:12.474379 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-02 00:33:12.474434 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-02 00:33:12.474453 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-02 00:33:12.474471 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-02 00:33:12.474489 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-02 00:33:12.474508 | orchestrator | 2026-04-02 00:33:12.474527 | orchestrator | 2026-04-02 00:33:12.474549 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:33:12.474568 | orchestrator | Thursday 02 April 2026 00:33:12 +0000 (0:00:01.250) 0:08:40.874 ******** 2026-04-02 00:33:12.474587 | orchestrator | =============================================================================== 2026-04-02 00:33:12.474601 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.41s 2026-04-02 00:33:12.474612 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 64.57s 2026-04-02 00:33:12.474624 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.35s 2026-04-02 00:33:12.474635 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.03s 2026-04-02 00:33:12.474645 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.64s 2026-04-02 00:33:12.474656 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.27s 2026-04-02 00:33:12.474667 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.22s 2026-04-02 00:33:12.474679 | orchestrator | osism.services.docker : Install containerd package --------------------- 11.07s 2026-04-02 00:33:12.474690 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.25s 2026-04-02 00:33:12.474701 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.01s 2026-04-02 00:33:12.474712 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.65s 2026-04-02 00:33:12.474740 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.25s 2026-04-02 00:33:12.474751 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.22s 2026-04-02 00:33:12.474764 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.89s 2026-04-02 00:33:12.474774 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.27s 2026-04-02 00:33:12.474785 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.74s 2026-04-02 00:33:12.474796 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.60s 2026-04-02 00:33:12.474807 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 7.01s 2026-04-02 00:33:12.474819 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.76s 2026-04-02 00:33:12.474830 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.75s 2026-04-02 00:33:12.646452 | orchestrator | + osism apply fail2ban 2026-04-02 00:33:24.363895 | orchestrator | 2026-04-02 00:33:24 | INFO  | Prepare task for execution of fail2ban. 2026-04-02 00:33:24.450882 | orchestrator | 2026-04-02 00:33:24 | INFO  | Task d8c8b25d-f370-490e-b0ee-b1c5090ea926 (fail2ban) was prepared for execution. 2026-04-02 00:33:24.451009 | orchestrator | 2026-04-02 00:33:24 | INFO  | It takes a moment until task d8c8b25d-f370-490e-b0ee-b1c5090ea926 (fail2ban) has been started and output is visible here. 2026-04-02 00:33:46.049114 | orchestrator | 2026-04-02 00:33:46.049302 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-02 00:33:46.049349 | orchestrator | 2026-04-02 00:33:46.049361 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-02 00:33:46.049372 | orchestrator | Thursday 02 April 2026 00:33:27 +0000 (0:00:00.362) 0:00:00.362 ******** 2026-04-02 00:33:46.049385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:33:46.049398 | orchestrator | 2026-04-02 00:33:46.049409 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-02 00:33:46.049420 | orchestrator | Thursday 02 April 2026 00:33:29 +0000 (0:00:01.166) 0:00:01.528 ******** 2026-04-02 00:33:46.049431 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:46.049443 | orchestrator 
| changed: [testbed-node-0] 2026-04-02 00:33:46.049454 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:46.049465 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:46.049475 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:46.049486 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:46.049496 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:46.049507 | orchestrator | 2026-04-02 00:33:46.049518 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-02 00:33:46.049529 | orchestrator | Thursday 02 April 2026 00:33:41 +0000 (0:00:12.070) 0:00:13.599 ******** 2026-04-02 00:33:46.049539 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:46.049550 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:46.049561 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:46.049571 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:46.049581 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:46.049592 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:46.049603 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:46.049613 | orchestrator | 2026-04-02 00:33:46.049624 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-04-02 00:33:46.049633 | orchestrator | Thursday 02 April 2026 00:33:42 +0000 (0:00:01.635) 0:00:15.235 ******** 2026-04-02 00:33:46.049644 | orchestrator | ok: [testbed-manager] 2026-04-02 00:33:46.049655 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:33:46.049666 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:33:46.049677 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:33:46.049687 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:33:46.049698 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:33:46.049708 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:33:46.049719 | orchestrator | 2026-04-02 00:33:46.049730 | orchestrator | TASK 
[osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-04-02 00:33:46.049741 | orchestrator | Thursday 02 April 2026 00:33:44 +0000 (0:00:01.312) 0:00:16.548 ******** 2026-04-02 00:33:46.049751 | orchestrator | changed: [testbed-manager] 2026-04-02 00:33:46.049762 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:33:46.049774 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:33:46.049784 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:33:46.049795 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:33:46.049806 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:33:46.049816 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:33:46.049827 | orchestrator | 2026-04-02 00:33:46.049838 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:33:46.049849 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:33:46.049860 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:33:46.049872 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:33:46.049882 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:33:46.049916 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:33:46.049927 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:33:46.049938 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:33:46.049948 | orchestrator | 2026-04-02 00:33:46.049959 | orchestrator | 2026-04-02 00:33:46.049970 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 
00:33:46.049981 | orchestrator | Thursday 02 April 2026 00:33:45 +0000 (0:00:01.643) 0:00:18.191 ******** 2026-04-02 00:33:46.049992 | orchestrator | =============================================================================== 2026-04-02 00:33:46.050002 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.07s 2026-04-02 00:33:46.050013 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.64s 2026-04-02 00:33:46.050078 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.64s 2026-04-02 00:33:46.050090 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.31s 2026-04-02 00:33:46.050101 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.17s 2026-04-02 00:33:46.219485 | orchestrator | + osism apply network 2026-04-02 00:33:57.614260 | orchestrator | 2026-04-02 00:33:57 | INFO  | Prepare task for execution of network. 2026-04-02 00:33:57.688133 | orchestrator | 2026-04-02 00:33:57 | INFO  | Task 63fb5cd1-3001-4643-a1f2-7c356330b7ad (network) was prepared for execution. 2026-04-02 00:33:57.688254 | orchestrator | 2026-04-02 00:33:57 | INFO  | It takes a moment until task 63fb5cd1-3001-4643-a1f2-7c356330b7ad (network) has been started and output is visible here. 
2026-04-02 00:34:25.993418 | orchestrator | 2026-04-02 00:34:25.993569 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-02 00:34:25.993608 | orchestrator | 2026-04-02 00:34:25.993629 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-02 00:34:25.993650 | orchestrator | Thursday 02 April 2026 00:34:01 +0000 (0:00:00.379) 0:00:00.379 ******** 2026-04-02 00:34:25.993668 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.993687 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:25.993704 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:25.993721 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:25.993738 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:25.993755 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:25.993773 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:34:25.993790 | orchestrator | 2026-04-02 00:34:25.993808 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-02 00:34:25.993824 | orchestrator | Thursday 02 April 2026 00:34:01 +0000 (0:00:00.602) 0:00:00.981 ******** 2026-04-02 00:34:25.993864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:34:25.993921 | orchestrator | 2026-04-02 00:34:25.993944 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-02 00:34:25.993961 | orchestrator | Thursday 02 April 2026 00:34:02 +0000 (0:00:01.114) 0:00:02.095 ******** 2026-04-02 00:34:25.993974 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.993986 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:25.993999 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:25.994011 | 
orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:25.994086 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:25.994100 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:25.994182 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:34:25.994204 | orchestrator | 2026-04-02 00:34:25.994223 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-02 00:34:25.994242 | orchestrator | Thursday 02 April 2026 00:34:05 +0000 (0:00:02.717) 0:00:04.813 ******** 2026-04-02 00:34:25.994260 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.994277 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:25.994294 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:25.994313 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:25.994332 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:25.994351 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:25.994368 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:34:25.994387 | orchestrator | 2026-04-02 00:34:25.994406 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-02 00:34:25.994425 | orchestrator | Thursday 02 April 2026 00:34:07 +0000 (0:00:01.598) 0:00:06.412 ******** 2026-04-02 00:34:25.994444 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-02 00:34:25.994465 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-02 00:34:25.994484 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-02 00:34:25.994503 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-02 00:34:25.994521 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-02 00:34:25.994540 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-02 00:34:25.994559 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-02 00:34:25.994578 | orchestrator | 2026-04-02 00:34:25.994597 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-04-02 00:34:25.994618 | orchestrator | Thursday 02 April 2026 00:34:08 +0000 (0:00:01.161) 0:00:07.573 ******** 2026-04-02 00:34:25.994636 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:25.994655 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:25.994674 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:25.994692 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:25.994711 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:25.994730 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:25.994748 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:25.994767 | orchestrator | 2026-04-02 00:34:25.994786 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-04-02 00:34:25.994807 | orchestrator | Thursday 02 April 2026 00:34:08 +0000 (0:00:00.579) 0:00:08.152 ******** 2026-04-02 00:34:25.994825 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:25.994845 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:25.994863 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:25.994882 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:25.994900 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:25.994919 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:25.994938 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:25.994956 | orchestrator | 2026-04-02 00:34:25.994994 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-04-02 00:34:25.995013 | orchestrator | Thursday 02 April 2026 00:34:09 +0000 (0:00:00.675) 0:00:08.827 ******** 2026-04-02 00:34:25.995031 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:25.995050 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:25.995068 | orchestrator | skipping: [testbed-node-1] 
2026-04-02 00:34:25.995085 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:25.995105 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:25.995122 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:25.995165 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:25.995185 | orchestrator | 2026-04-02 00:34:25.995204 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-02 00:34:25.995223 | orchestrator | Thursday 02 April 2026 00:34:10 +0000 (0:00:00.657) 0:00:09.485 ******** 2026-04-02 00:34:25.995241 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-02 00:34:25.995275 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-02 00:34:25.995291 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-02 00:34:25.995308 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:34:25.995326 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-02 00:34:25.995345 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-02 00:34:25.995363 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-02 00:34:25.995381 | orchestrator | 2026-04-02 00:34:25.995432 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-02 00:34:25.995453 | orchestrator | Thursday 02 April 2026 00:34:13 +0000 (0:00:03.176) 0:00:12.662 ******** 2026-04-02 00:34:25.995473 | orchestrator | changed: [testbed-manager] 2026-04-02 00:34:25.995490 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:34:25.995509 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:34:25.995526 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:34:25.995545 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:34:25.995564 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:34:25.995583 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:34:25.995600 | orchestrator | 2026-04-02 00:34:25.995619 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-02 00:34:25.995636 | orchestrator | Thursday 02 April 2026 00:34:14 +0000 (0:00:01.659) 0:00:14.321 ******** 2026-04-02 00:34:25.995655 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-02 00:34:25.995673 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-02 00:34:25.995692 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:34:25.995705 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-02 00:34:25.995715 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-02 00:34:25.995726 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-02 00:34:25.995736 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-02 00:34:25.995747 | orchestrator | 2026-04-02 00:34:25.995758 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-02 00:34:25.995769 | orchestrator | Thursday 02 April 2026 00:34:16 +0000 (0:00:01.834) 0:00:16.156 ******** 2026-04-02 00:34:25.995779 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.995790 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:25.995801 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:25.995811 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:25.995822 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:25.995832 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:25.995843 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:34:25.995853 | orchestrator | 2026-04-02 00:34:25.995864 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-02 00:34:25.995875 | orchestrator | Thursday 02 April 2026 00:34:17 +0000 (0:00:01.079) 0:00:17.235 ******** 2026-04-02 00:34:25.995886 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:25.995896 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:25.995907 | orchestrator | skipping: [testbed-node-1] 2026-04-02 
00:34:25.995918 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:25.995928 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:25.995939 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:25.995949 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:25.995960 | orchestrator | 2026-04-02 00:34:25.995971 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-02 00:34:25.995982 | orchestrator | Thursday 02 April 2026 00:34:18 +0000 (0:00:00.618) 0:00:17.853 ******** 2026-04-02 00:34:25.995992 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.996003 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:25.996013 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:25.996024 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:25.996035 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:25.996046 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:25.996056 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:34:25.996067 | orchestrator | 2026-04-02 00:34:25.996078 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-02 00:34:25.996099 | orchestrator | Thursday 02 April 2026 00:34:20 +0000 (0:00:02.380) 0:00:20.233 ******** 2026-04-02 00:34:25.996110 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:25.996121 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:25.996132 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:25.996186 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:25.996199 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:25.996209 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:25.996220 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-02 00:34:25.996233 | orchestrator | 2026-04-02 00:34:25.996244 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-02 00:34:25.996263 | orchestrator | Thursday 02 April 2026 00:34:21 +0000 (0:00:00.779) 0:00:21.013 ******** 2026-04-02 00:34:25.996274 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.996284 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:34:25.996295 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:34:25.996306 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:34:25.996316 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:34:25.996327 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:34:25.996337 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:34:25.996348 | orchestrator | 2026-04-02 00:34:25.996359 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-02 00:34:25.996370 | orchestrator | Thursday 02 April 2026 00:34:23 +0000 (0:00:01.676) 0:00:22.690 ******** 2026-04-02 00:34:25.996382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:34:25.996395 | orchestrator | 2026-04-02 00:34:25.996406 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-02 00:34:25.996417 | orchestrator | Thursday 02 April 2026 00:34:24 +0000 (0:00:01.084) 0:00:23.774 ******** 2026-04-02 00:34:25.996427 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.996438 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:25.996449 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:25.996459 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:25.996470 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:25.996481 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:25.996492 | orchestrator | ok: [testbed-node-5] 2026-04-02 
00:34:25.996502 | orchestrator | 2026-04-02 00:34:25.996513 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-02 00:34:25.996524 | orchestrator | Thursday 02 April 2026 00:34:25 +0000 (0:00:01.102) 0:00:24.876 ******** 2026-04-02 00:34:25.996535 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:25.996545 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:25.996556 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:25.996567 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:25.996577 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:25.996597 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:42.088826 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:34:42.088943 | orchestrator | 2026-04-02 00:34:42.088961 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-02 00:34:42.088976 | orchestrator | Thursday 02 April 2026 00:34:26 +0000 (0:00:00.576) 0:00:25.453 ******** 2026-04-02 00:34:42.088988 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-02 00:34:42.089000 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-02 00:34:42.089011 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-02 00:34:42.089022 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-02 00:34:42.089033 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-02 00:34:42.089070 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-02 00:34:42.089082 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-02 00:34:42.089093 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-02 00:34:42.089104 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/netplan/01-osism.yaml)  2026-04-02 00:34:42.089207 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-02 00:34:42.089229 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-02 00:34:42.089243 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-02 00:34:42.089254 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-02 00:34:42.089265 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-02 00:34:42.089276 | orchestrator | 2026-04-02 00:34:42.089287 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-02 00:34:42.089298 | orchestrator | Thursday 02 April 2026 00:34:27 +0000 (0:00:01.122) 0:00:26.576 ******** 2026-04-02 00:34:42.089309 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:42.089320 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:42.089330 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:42.089341 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:42.089352 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:42.089362 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:42.089375 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:42.089388 | orchestrator | 2026-04-02 00:34:42.089401 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-02 00:34:42.089413 | orchestrator | Thursday 02 April 2026 00:34:27 +0000 (0:00:00.563) 0:00:27.139 ******** 2026-04-02 00:34:42.089427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-5, testbed-node-3, testbed-node-4 2026-04-02 00:34:42.089443 | orchestrator | 2026-04-02 
00:34:42.089456 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-02 00:34:42.089469 | orchestrator | Thursday 02 April 2026 00:34:32 +0000 (0:00:04.368) 0:00:31.508 ******** 2026-04-02 00:34:42.089484 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-02 00:34:42.089513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089525 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-02 00:34:42.089537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 
'addresses': []}}) 2026-04-02 00:34:42.089600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-02 00:34:42.089624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-02 00:34:42.089655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-02 00:34:42.089666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-02 00:34:42.089678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-02 00:34:42.089689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-02 00:34:42.089700 | orchestrator | 2026-04-02 00:34:42.089711 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-02 00:34:42.089722 | orchestrator | Thursday 02 April 2026 00:34:37 +0000 (0:00:05.618) 0:00:37.127 ******** 2026-04-02 00:34:42.089734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089745 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-02 00:34:42.089761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-02 00:34:42.089773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089784 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:42.089814 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-02 00:34:42.089833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:54.276395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-02 00:34:54.276505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-02 00:34:54.276522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-02 00:34:54.276533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-02 00:34:54.276543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-02 00:34:54.276553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-02 00:34:54.276564 | orchestrator | 2026-04-02 00:34:54.276575 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-02 00:34:54.276587 | orchestrator | Thursday 02 April 2026 00:34:43 +0000 (0:00:05.356) 0:00:42.483 ******** 2026-04-02 00:34:54.276597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:34:54.276608 | orchestrator | 2026-04-02 00:34:54.276618 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-02 00:34:54.276636 | orchestrator | Thursday 02 April 2026 00:34:44 +0000 (0:00:01.291) 0:00:43.774 ******** 2026-04-02 00:34:54.276654 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:54.276676 | orchestrator | ok: [testbed-node-0] 2026-04-02 
00:34:54.276699 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:34:54.276715 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:34:54.276732 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:54.276747 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:34:54.276763 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:34:54.276780 | orchestrator | 2026-04-02 00:34:54.276844 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-02 00:34:54.276864 | orchestrator | Thursday 02 April 2026 00:34:45 +0000 (0:00:00.982) 0:00:44.756 ******** 2026-04-02 00:34:54.276881 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-02 00:34:54.276899 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-02 00:34:54.276917 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-02 00:34:54.276934 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-02 00:34:54.276949 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-02 00:34:54.276961 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-02 00:34:54.276972 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-02 00:34:54.276984 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-02 00:34:54.276996 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:54.277025 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-02 00:34:54.277038 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-02 00:34:54.277051 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-02 00:34:54.277063 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-02 00:34:54.277076 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:54.277088 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-02 00:34:54.277136 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-02 00:34:54.277149 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-02 00:34:54.277162 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-02 00:34:54.277195 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:54.277208 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-02 00:34:54.277220 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-02 00:34:54.277232 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-02 00:34:54.277245 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-02 00:34:54.277258 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:54.277270 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-02 00:34:54.277283 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-02 00:34:54.277296 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-02 00:34:54.277307 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-02 00:34:54.277318 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:54.277329 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:54.277339 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-02 00:34:54.277350 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-02 00:34:54.277361 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-02 00:34:54.277372 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-02 00:34:54.277383 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:54.277394 | orchestrator | 2026-04-02 00:34:54.277405 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-02 00:34:54.277426 | orchestrator | Thursday 02 April 2026 00:34:46 +0000 (0:00:00.882) 0:00:45.639 ******** 2026-04-02 00:34:54.277437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:34:54.277449 | orchestrator | 2026-04-02 00:34:54.277460 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-02 00:34:54.277471 | orchestrator | Thursday 02 April 2026 00:34:47 +0000 (0:00:01.205) 0:00:46.845 ******** 2026-04-02 00:34:54.277482 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:54.277493 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:54.277503 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:54.277514 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:54.277526 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:54.277537 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:54.277548 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:54.277559 | orchestrator | 2026-04-02 00:34:54.277570 | orchestrator | TASK [osism.commons.network : Deploy 
network-extra-init systemd service] ******* 2026-04-02 00:34:54.277581 | orchestrator | Thursday 02 April 2026 00:34:48 +0000 (0:00:00.608) 0:00:47.454 ******** 2026-04-02 00:34:54.277592 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:54.277603 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:54.277614 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:54.277625 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:54.277635 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:54.277646 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:54.277657 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:54.277668 | orchestrator | 2026-04-02 00:34:54.277685 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-02 00:34:54.277697 | orchestrator | Thursday 02 April 2026 00:34:48 +0000 (0:00:00.724) 0:00:48.179 ******** 2026-04-02 00:34:54.277708 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:34:54.277719 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:34:54.277730 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:34:54.277741 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:34:54.277751 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:34:54.277762 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:34:54.277773 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:34:54.277784 | orchestrator | 2026-04-02 00:34:54.277795 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-02 00:34:54.277806 | orchestrator | Thursday 02 April 2026 00:34:49 +0000 (0:00:00.576) 0:00:48.755 ******** 2026-04-02 00:34:54.277817 | orchestrator | ok: [testbed-manager] 2026-04-02 00:34:54.277828 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:34:54.277839 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:34:54.277850 | orchestrator | ok: [testbed-node-4] 
2026-04-02 00:34:54.277861 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:34:54.277872 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:34:54.277883 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:34:54.277894 | orchestrator |
2026-04-02 00:34:54.277905 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-02 00:34:54.277916 | orchestrator | Thursday 02 April 2026 00:34:51 +0000 (0:00:01.713) 0:00:50.468 ********
2026-04-02 00:34:54.277927 | orchestrator | ok: [testbed-manager]
2026-04-02 00:34:54.277938 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:34:54.277949 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:34:54.277960 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:34:54.277971 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:34:54.277981 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:34:54.277992 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:34:54.278003 | orchestrator |
2026-04-02 00:34:54.278076 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-02 00:34:54.278114 | orchestrator | Thursday 02 April 2026 00:34:52 +0000 (0:00:01.144) 0:00:51.613 ********
2026-04-02 00:34:54.278135 | orchestrator | ok: [testbed-manager]
2026-04-02 00:34:54.278146 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:34:54.278157 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:34:54.278167 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:34:54.278178 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:34:54.278189 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:34:54.278199 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:34:54.278245 | orchestrator |
2026-04-02 00:34:54.278266 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-02 00:34:55.889534 | orchestrator | Thursday 02 April 2026 00:34:54 +0000 (0:00:02.008) 0:00:53.621 ********
2026-04-02 00:34:55.889680 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:34:55.889707 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:34:55.889726 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:34:55.889746 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:34:55.889765 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:34:55.889784 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:34:55.889805 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:34:55.889826 | orchestrator |
2026-04-02 00:34:55.889848 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-02 00:34:55.889869 | orchestrator | Thursday 02 April 2026 00:34:55 +0000 (0:00:00.776) 0:00:54.397 ********
2026-04-02 00:34:55.889890 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:34:55.889911 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:34:55.889931 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:34:55.889952 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:34:55.889972 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:34:55.889991 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:34:55.890011 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:34:55.890162 | orchestrator |
2026-04-02 00:34:55.890182 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:34:55.890203 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-02 00:34:55.890229 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 00:34:55.890248 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 00:34:55.890270 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 00:34:55.890301 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 00:34:55.890338 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 00:34:55.890372 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 00:34:55.890405 | orchestrator |
2026-04-02 00:34:55.890492 | orchestrator |
2026-04-02 00:34:55.890514 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:34:55.890532 | orchestrator | Thursday 02 April 2026 00:34:55 +0000 (0:00:00.524) 0:00:54.922 ********
2026-04-02 00:34:55.890550 | orchestrator | ===============================================================================
2026-04-02 00:34:55.890570 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.62s
2026-04-02 00:34:55.890589 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.36s
2026-04-02 00:34:55.890607 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.37s
2026-04-02 00:34:55.890664 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.18s
2026-04-02 00:34:55.890684 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.72s
2026-04-02 00:34:55.890700 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.38s
2026-04-02 00:34:55.890715 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.01s
2026-04-02 00:34:55.890732 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.83s
2026-04-02 00:34:55.890748 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.71s
2026-04-02 00:34:55.890765 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2026-04-02 00:34:55.890782 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s
2026-04-02 00:34:55.890799 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.60s
2026-04-02 00:34:55.890816 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s
2026-04-02 00:34:55.890833 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.21s
2026-04-02 00:34:55.890850 | orchestrator | osism.commons.network : Create required directories --------------------- 1.16s
2026-04-02 00:34:55.890865 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.14s
2026-04-02 00:34:55.890881 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.12s
2026-04-02 00:34:55.890898 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.11s
2026-04-02 00:34:55.890915 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s
2026-04-02 00:34:55.890931 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.08s
2026-04-02 00:34:56.070213 | orchestrator | + osism apply wireguard
2026-04-02 00:35:07.367183 | orchestrator | 2026-04-02 00:35:07 | INFO  | Prepare task for execution of wireguard.
2026-04-02 00:35:07.441316 | orchestrator | 2026-04-02 00:35:07 | INFO  | Task 40b22b6c-5628-4fb1-86aa-3ec66b831edd (wireguard) was prepared for execution.
2026-04-02 00:35:07.441415 | orchestrator | 2026-04-02 00:35:07 | INFO  | It takes a moment until task 40b22b6c-5628-4fb1-86aa-3ec66b831edd (wireguard) has been started and output is visible here.
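The PLAY RECAP lines emitted above are where a CI wrapper typically decides pass/fail. A minimal sketch of that check, assuming nothing beyond standard tools (the sample recap line is copied from this log; `osism`/Zuul perform this evaluation internally, so this is illustrative only):

```shell
# Parse an Ansible PLAY RECAP line and extract the failure counters.
recap='testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s\n' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
echo "failed=$failed unreachable=$unreachable"   # prints: failed=0 unreachable=0
```

Running the same extraction over every recap line and aborting on a non-zero counter is the usual way to gate the next `osism apply` step.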
2026-04-02 00:35:26.668542 | orchestrator |
2026-04-02 00:35:26.668653 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-02 00:35:26.668669 | orchestrator |
2026-04-02 00:35:26.668682 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-02 00:35:26.668693 | orchestrator | Thursday 02 April 2026 00:35:10 +0000 (0:00:00.305) 0:00:00.305 ********
2026-04-02 00:35:26.668706 | orchestrator | ok: [testbed-manager]
2026-04-02 00:35:26.668718 | orchestrator |
2026-04-02 00:35:26.668730 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-02 00:35:26.668741 | orchestrator | Thursday 02 April 2026 00:35:12 +0000 (0:00:01.808) 0:00:02.113 ********
2026-04-02 00:35:26.668752 | orchestrator | changed: [testbed-manager]
2026-04-02 00:35:26.668764 | orchestrator |
2026-04-02 00:35:26.668775 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-02 00:35:26.668787 | orchestrator | Thursday 02 April 2026 00:35:19 +0000 (0:00:06.508) 0:00:08.622 ********
2026-04-02 00:35:26.668798 | orchestrator | changed: [testbed-manager]
2026-04-02 00:35:26.668809 | orchestrator |
2026-04-02 00:35:26.668820 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-02 00:35:26.668853 | orchestrator | Thursday 02 April 2026 00:35:19 +0000 (0:00:00.559) 0:00:09.181 ********
2026-04-02 00:35:26.668865 | orchestrator | changed: [testbed-manager]
2026-04-02 00:35:26.668876 | orchestrator |
2026-04-02 00:35:26.668887 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-02 00:35:26.668898 | orchestrator | Thursday 02 April 2026 00:35:20 +0000 (0:00:00.423) 0:00:09.605 ********
2026-04-02 00:35:26.668934 | orchestrator | ok: [testbed-manager]
2026-04-02 00:35:26.668949 | orchestrator |
2026-04-02 00:35:26.668967 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-02 00:35:26.668982 | orchestrator | Thursday 02 April 2026 00:35:20 +0000 (0:00:00.528) 0:00:10.134 ********
2026-04-02 00:35:26.669000 | orchestrator | ok: [testbed-manager]
2026-04-02 00:35:26.669019 | orchestrator |
2026-04-02 00:35:26.669110 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-02 00:35:26.669134 | orchestrator | Thursday 02 April 2026 00:35:21 +0000 (0:00:00.386) 0:00:10.520 ********
2026-04-02 00:35:26.669154 | orchestrator | ok: [testbed-manager]
2026-04-02 00:35:26.669174 | orchestrator |
2026-04-02 00:35:26.669194 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-02 00:35:26.669215 | orchestrator | Thursday 02 April 2026 00:35:21 +0000 (0:00:00.414) 0:00:10.935 ********
2026-04-02 00:35:26.669236 | orchestrator | changed: [testbed-manager]
2026-04-02 00:35:26.669257 | orchestrator |
2026-04-02 00:35:26.669277 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-02 00:35:26.669297 | orchestrator | Thursday 02 April 2026 00:35:22 +0000 (0:00:01.209) 0:00:12.144 ********
2026-04-02 00:35:26.669317 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-02 00:35:26.669338 | orchestrator | changed: [testbed-manager]
2026-04-02 00:35:26.669357 | orchestrator |
2026-04-02 00:35:26.669377 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-02 00:35:26.669397 | orchestrator | Thursday 02 April 2026 00:35:23 +0000 (0:00:00.903) 0:00:13.048 ********
2026-04-02 00:35:26.669418 | orchestrator | changed: [testbed-manager]
2026-04-02 00:35:26.669437 | orchestrator |
2026-04-02 00:35:26.669458 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-02 00:35:26.669487 | orchestrator | Thursday 02 April 2026 00:35:25 +0000 (0:00:01.990) 0:00:15.038 ********
2026-04-02 00:35:26.669508 | orchestrator | changed: [testbed-manager]
2026-04-02 00:35:26.669528 | orchestrator |
2026-04-02 00:35:26.669547 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:35:26.669566 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:35:26.669588 | orchestrator |
2026-04-02 00:35:26.669608 | orchestrator |
2026-04-02 00:35:26.669628 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:35:26.669649 | orchestrator | Thursday 02 April 2026 00:35:26 +0000 (0:00:00.915) 0:00:15.954 ********
2026-04-02 00:35:26.669669 | orchestrator | ===============================================================================
2026-04-02 00:35:26.669689 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.51s
2026-04-02 00:35:26.669710 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.99s
2026-04-02 00:35:26.669730 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.81s
2026-04-02 00:35:26.669750 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s
2026-04-02 00:35:26.669770 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s
2026-04-02 00:35:26.669791 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s
2026-04-02 00:35:26.669810 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2026-04-02 00:35:26.669830 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2026-04-02 00:35:26.669851 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2026-04-02 00:35:26.669871 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-04-02 00:35:26.669890 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-04-02 00:35:26.836551 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-02 00:35:26.868610 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-02 00:35:26.868731 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-02 00:35:26.943442 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 202 0 --:--:-- --:--:-- --:--:-- 205
2026-04-02 00:35:26.956304 | orchestrator | + osism apply --environment custom workarounds
2026-04-02 00:35:28.154823 | orchestrator | 2026-04-02 00:35:28 | INFO  | Trying to run play workarounds in environment custom
2026-04-02 00:35:38.225127 | orchestrator | 2026-04-02 00:35:38 | INFO  | Prepare task for execution of workarounds.
2026-04-02 00:35:38.313555 | orchestrator | 2026-04-02 00:35:38 | INFO  | Task ac5725a2-56f2-46cb-b371-628f07cb6633 (workarounds) was prepared for execution.
2026-04-02 00:35:38.313664 | orchestrator | 2026-04-02 00:35:38 | INFO  | It takes a moment until task ac5725a2-56f2-46cb-b371-628f07cb6633 (workarounds) has been started and output is visible here.
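The wireguard play above generates a server key pair plus a preshared key and then templates `wg0.conf` and per-client configurations. The real template belongs to `osism.services.wireguard`; the file below is only an illustrative sketch of the usual wg-quick layout, and the subnet, port, and placeholder keys are assumptions, not values from this job:

```shell
# Write an illustrative wg-quick config; keys and addresses are placeholders (assumptions).
cat > /tmp/wg0.conf <<'EOF'
[Interface]
Address = 192.168.42.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.42.2/32
EOF
# One [Interface] section plus one [Peer] per client:
grep -c '^\[' /tmp/wg0.conf   # prints: 2
```

On the manager the role then enables `wg-quick@wg0.service`, which reads exactly this kind of file from /etc/wireguard/.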
2026-04-02 00:36:03.113492 | orchestrator |
2026-04-02 00:36:03.113631 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 00:36:03.113657 | orchestrator |
2026-04-02 00:36:03.113676 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-02 00:36:03.113694 | orchestrator | Thursday 02 April 2026 00:35:41 +0000 (0:00:00.184) 0:00:00.184 ********
2026-04-02 00:36:03.113712 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-02 00:36:03.113730 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-02 00:36:03.113748 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-02 00:36:03.113765 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-02 00:36:03.113782 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-02 00:36:03.113801 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-02 00:36:03.113820 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-02 00:36:03.113838 | orchestrator |
2026-04-02 00:36:03.113858 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-02 00:36:03.113877 | orchestrator |
2026-04-02 00:36:03.113897 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-02 00:36:03.113915 | orchestrator | Thursday 02 April 2026 00:35:42 +0000 (0:00:00.728) 0:00:00.913 ********
2026-04-02 00:36:03.113935 | orchestrator | ok: [testbed-manager]
2026-04-02 00:36:03.113954 | orchestrator |
2026-04-02 00:36:03.114004 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-02 00:36:03.114088 | orchestrator |
2026-04-02 00:36:03.114110 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-02 00:36:03.114130 | orchestrator | Thursday 02 April 2026 00:35:44 +0000 (0:00:02.484) 0:00:03.398 ********
2026-04-02 00:36:03.114151 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:36:03.114172 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:36:03.114193 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:36:03.114213 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:36:03.114235 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:36:03.114256 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:36:03.114277 | orchestrator |
2026-04-02 00:36:03.114298 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-02 00:36:03.114320 | orchestrator |
2026-04-02 00:36:03.114341 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-02 00:36:03.114384 | orchestrator | Thursday 02 April 2026 00:35:47 +0000 (0:00:02.369) 0:00:05.767 ********
2026-04-02 00:36:03.114406 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-02 00:36:03.114428 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-02 00:36:03.114448 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-02 00:36:03.114500 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-02 00:36:03.114520 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-02 00:36:03.114540 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-02 00:36:03.114560 | orchestrator |
2026-04-02 00:36:03.114579 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-02 00:36:03.114598 | orchestrator | Thursday 02 April 2026 00:35:48 +0000 (0:00:01.384) 0:00:07.151 ********
2026-04-02 00:36:03.114616 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:36:03.114635 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:36:03.114653 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:36:03.114672 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:36:03.114691 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:36:03.114711 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:36:03.114730 | orchestrator |
2026-04-02 00:36:03.114750 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-02 00:36:03.114771 | orchestrator | Thursday 02 April 2026 00:35:52 +0000 (0:00:03.915) 0:00:11.067 ********
2026-04-02 00:36:03.114790 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:36:03.114810 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:36:03.114830 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:36:03.114849 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:36:03.114870 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:36:03.114890 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:36:03.114909 | orchestrator |
2026-04-02 00:36:03.114930 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-02 00:36:03.114950 | orchestrator |
2026-04-02 00:36:03.114995 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-02 00:36:03.115017 | orchestrator | Thursday 02 April 2026 00:35:52 +0000 (0:00:00.526) 0:00:11.593 ********
2026-04-02 00:36:03.115036 | orchestrator | changed: [testbed-manager]
2026-04-02 00:36:03.115055 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:36:03.115073 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:36:03.115090 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:36:03.115108 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:36:03.115126 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:36:03.115143 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:36:03.115161 | orchestrator |
2026-04-02 00:36:03.115179 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-02 00:36:03.115196 | orchestrator | Thursday 02 April 2026 00:35:54 +0000 (0:00:01.793) 0:00:13.387 ********
2026-04-02 00:36:03.115214 | orchestrator | changed: [testbed-manager]
2026-04-02 00:36:03.115232 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:36:03.115251 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:36:03.115270 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:36:03.115287 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:36:03.115305 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:36:03.115354 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:36:03.115367 | orchestrator |
2026-04-02 00:36:03.115378 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-02 00:36:03.115388 | orchestrator | Thursday 02 April 2026 00:35:56 +0000 (0:00:01.465) 0:00:14.852 ********
2026-04-02 00:36:03.115403 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:36:03.115420 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:36:03.115436 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:36:03.115453 | orchestrator | ok: [testbed-manager]
2026-04-02 00:36:03.115464 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:36:03.115473 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:36:03.115483 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:36:03.115497 | orchestrator |
2026-04-02 00:36:03.115532 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-02 00:36:03.115543 | orchestrator | Thursday 02 April 2026 00:35:57 +0000 (0:00:01.621) 0:00:16.473 ********
2026-04-02 00:36:03.115553 | orchestrator | changed: [testbed-manager]
2026-04-02 00:36:03.115562 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:36:03.115572 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:36:03.115581 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:36:03.115591 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:36:03.115601 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:36:03.115610 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:36:03.115620 | orchestrator |
2026-04-02 00:36:03.115630 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-02 00:36:03.115640 | orchestrator | Thursday 02 April 2026 00:35:59 +0000 (0:00:01.466) 0:00:17.940 ********
2026-04-02 00:36:03.115649 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:36:03.115659 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:36:03.115669 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:36:03.115678 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:36:03.115688 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:36:03.115697 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:36:03.115707 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:36:03.115716 | orchestrator |
2026-04-02 00:36:03.115726 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-02 00:36:03.115736 | orchestrator |
2026-04-02 00:36:03.115745 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-02 00:36:03.115755 | orchestrator | Thursday 02 April 2026 00:35:59 +0000 (0:00:00.644) 0:00:18.585 ********
2026-04-02 00:36:03.115765 | orchestrator | ok: [testbed-manager]
2026-04-02 00:36:03.115774 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:36:03.115784 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:36:03.115793 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:36:03.115803 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:36:03.115813 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:36:03.115831 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:36:03.115840 | orchestrator |
2026-04-02 00:36:03.115850 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:36:03.115861 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-02 00:36:03.115873 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:03.115882 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:03.115892 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:03.115902 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:03.115911 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:03.115921 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:03.115931 | orchestrator |
2026-04-02 00:36:03.115940 | orchestrator |
2026-04-02 00:36:03.115950 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:36:03.115960 | orchestrator | Thursday 02 April 2026 00:36:03 +0000 (0:00:03.181) 0:00:21.766 ********
2026-04-02 00:36:03.115970 | orchestrator | ===============================================================================
2026-04-02 00:36:03.116047 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.92s
2026-04-02 00:36:03.116063 | orchestrator | Install python3-docker -------------------------------------------------- 3.18s
2026-04-02 00:36:03.116079 | orchestrator | Apply netplan configuration --------------------------------------------- 2.48s
2026-04-02 00:36:03.116095 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s
2026-04-02 00:36:03.116111 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.79s
2026-04-02 00:36:03.116127 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.62s
2026-04-02 00:36:03.116144 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.47s
2026-04-02 00:36:03.116161 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.47s
2026-04-02 00:36:03.116177 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.38s
2026-04-02 00:36:03.116193 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s
2026-04-02 00:36:03.116210 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2026-04-02 00:36:03.116238 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.53s
2026-04-02 00:36:03.446130 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-02 00:36:14.585556 | orchestrator | 2026-04-02 00:36:14 | INFO  | Prepare task for execution of reboot.
2026-04-02 00:36:14.667903 | orchestrator | 2026-04-02 00:36:14 | INFO  | Task 38dd7034-1825-4c8d-98f1-7d48dac7593e (reboot) was prepared for execution.
2026-04-02 00:36:14.668001 | orchestrator | 2026-04-02 00:36:14 | INFO  | It takes a moment until task 38dd7034-1825-4c8d-98f1-7d48dac7593e (reboot) has been started and output is visible here.
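The CA workaround above copies `testbed.crt` onto each node and runs `update-ca-certificates` (the Debian/Ubuntu path; the RedHat `update-ca-trust` task is skipped on these hosts). A hedged sketch of the same Debian-style flow: the staging path and the PEM sanity check below are illustrative, while `/usr/local/share/ca-certificates/` and `update-ca-certificates` are the standard Debian mechanism:

```shell
# Illustrative: stage a CA file and sanity-check that it is PEM before trusting it.
ca_src=/tmp/testbed-ca-demo.crt   # stand-in for .../kolla/certificates/ca/testbed.crt
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...' '-----END CERTIFICATE-----' > "$ca_src"
head -n 1 "$ca_src"               # prints: -----BEGIN CERTIFICATE-----
# On a real Debian/Ubuntu node the play then effectively does:
#   install -m 0644 "$ca_src" /usr/local/share/ca-certificates/testbed.crt
#   update-ca-certificates
```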
2026-04-02 00:36:25.904018 | orchestrator | 2026-04-02 00:36:25.904111 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-02 00:36:25.904120 | orchestrator | 2026-04-02 00:36:25.904124 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-02 00:36:25.904129 | orchestrator | Thursday 02 April 2026 00:36:17 +0000 (0:00:00.239) 0:00:00.239 ******** 2026-04-02 00:36:25.904133 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:36:25.904139 | orchestrator | 2026-04-02 00:36:25.904143 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-02 00:36:25.904148 | orchestrator | Thursday 02 April 2026 00:36:17 +0000 (0:00:00.145) 0:00:00.385 ******** 2026-04-02 00:36:25.904152 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:36:25.904157 | orchestrator | 2026-04-02 00:36:25.904161 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-02 00:36:25.904165 | orchestrator | Thursday 02 April 2026 00:36:19 +0000 (0:00:01.270) 0:00:01.656 ******** 2026-04-02 00:36:25.904169 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:36:25.904173 | orchestrator | 2026-04-02 00:36:25.904177 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-02 00:36:25.904182 | orchestrator | 2026-04-02 00:36:25.904186 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-02 00:36:25.904190 | orchestrator | Thursday 02 April 2026 00:36:19 +0000 (0:00:00.103) 0:00:01.759 ******** 2026-04-02 00:36:25.904194 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:36:25.904199 | orchestrator | 2026-04-02 00:36:25.904203 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-02 00:36:25.904207 | orchestrator | Thursday 02 April 
2026 00:36:19 +0000 (0:00:00.099) 0:00:01.858 ********
2026-04-02 00:36:25.904211 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:36:25.904215 | orchestrator |
2026-04-02 00:36:25.904232 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-02 00:36:25.904236 | orchestrator | Thursday 02 April 2026 00:36:20 +0000 (0:00:01.071) 0:00:02.930 ********
2026-04-02 00:36:25.904240 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:36:25.904244 | orchestrator |
2026-04-02 00:36:25.904264 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-02 00:36:25.904270 | orchestrator |
2026-04-02 00:36:25.904274 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-02 00:36:25.904278 | orchestrator | Thursday 02 April 2026 00:36:20 +0000 (0:00:00.128) 0:00:03.058 ********
2026-04-02 00:36:25.904282 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:36:25.904286 | orchestrator |
2026-04-02 00:36:25.904291 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-02 00:36:25.904295 | orchestrator | Thursday 02 April 2026 00:36:20 +0000 (0:00:00.098) 0:00:03.157 ********
2026-04-02 00:36:25.904299 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:36:25.904303 | orchestrator |
2026-04-02 00:36:25.904307 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-02 00:36:25.904311 | orchestrator | Thursday 02 April 2026 00:36:21 +0000 (0:00:01.037) 0:00:04.195 ********
2026-04-02 00:36:25.904315 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:36:25.904320 | orchestrator |
2026-04-02 00:36:25.904324 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-02 00:36:25.904328 | orchestrator |
2026-04-02 00:36:25.904332 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-02 00:36:25.904336 | orchestrator | Thursday 02 April 2026 00:36:21 +0000 (0:00:00.110) 0:00:04.305 ********
2026-04-02 00:36:25.904340 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:36:25.904344 | orchestrator |
2026-04-02 00:36:25.904348 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-02 00:36:25.904353 | orchestrator | Thursday 02 April 2026 00:36:21 +0000 (0:00:00.097) 0:00:04.402 ********
2026-04-02 00:36:25.904357 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:36:25.904361 | orchestrator |
2026-04-02 00:36:25.904365 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-02 00:36:25.904369 | orchestrator | Thursday 02 April 2026 00:36:23 +0000 (0:00:01.050) 0:00:05.453 ********
2026-04-02 00:36:25.904373 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:36:25.904377 | orchestrator |
2026-04-02 00:36:25.904382 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-02 00:36:25.904386 | orchestrator |
2026-04-02 00:36:25.904390 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-02 00:36:25.904394 | orchestrator | Thursday 02 April 2026 00:36:23 +0000 (0:00:00.114) 0:00:05.567 ********
2026-04-02 00:36:25.904398 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:36:25.904402 | orchestrator |
2026-04-02 00:36:25.904406 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-02 00:36:25.904411 | orchestrator | Thursday 02 April 2026 00:36:23 +0000 (0:00:00.207) 0:00:05.775 ********
2026-04-02 00:36:25.904415 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:36:25.904419 | orchestrator |
2026-04-02 00:36:25.904423 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-02 00:36:25.904427 | orchestrator | Thursday 02 April 2026 00:36:24 +0000 (0:00:01.035) 0:00:06.811 ********
2026-04-02 00:36:25.904431 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:36:25.904436 | orchestrator |
2026-04-02 00:36:25.904440 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-02 00:36:25.904444 | orchestrator |
2026-04-02 00:36:25.904448 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-02 00:36:25.904452 | orchestrator | Thursday 02 April 2026 00:36:24 +0000 (0:00:00.108) 0:00:06.919 ********
2026-04-02 00:36:25.904456 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:36:25.904460 | orchestrator |
2026-04-02 00:36:25.904475 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-02 00:36:25.904490 | orchestrator | Thursday 02 April 2026 00:36:24 +0000 (0:00:00.110) 0:00:07.029 ********
2026-04-02 00:36:25.904500 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:36:25.904507 | orchestrator |
2026-04-02 00:36:25.904513 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-02 00:36:25.904527 | orchestrator | Thursday 02 April 2026 00:36:25 +0000 (0:00:01.063) 0:00:08.092 ********
2026-04-02 00:36:25.904549 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:36:25.904554 | orchestrator |
2026-04-02 00:36:25.904559 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:36:25.904565 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:25.904572 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:25.904576 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:25.904581 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:25.904586 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:25.904591 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 00:36:25.904596 | orchestrator |
2026-04-02 00:36:25.904601 | orchestrator |
2026-04-02 00:36:25.904606 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:36:25.904614 | orchestrator | Thursday 02 April 2026 00:36:25 +0000 (0:00:00.038) 0:00:08.130 ********
2026-04-02 00:36:25.904620 | orchestrator | ===============================================================================
2026-04-02 00:36:25.904624 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.53s
2026-04-02 00:36:25.904629 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s
2026-04-02 00:36:25.904634 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s
2026-04-02 00:36:26.076295 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-02 00:36:37.442311 | orchestrator | 2026-04-02 00:36:37 | INFO  | Prepare task for execution of wait-for-connection.
2026-04-02 00:36:37.511977 | orchestrator | 2026-04-02 00:36:37 | INFO  | Task b5e2cbd5-11dc-485b-b1ae-e6028c085b07 (wait-for-connection) was prepared for execution.
2026-04-02 00:36:37.512101 | orchestrator | 2026-04-02 00:36:37 | INFO  | It takes a moment until task b5e2cbd5-11dc-485b-b1ae-e6028c085b07 (wait-for-connection) has been started and output is visible here.
2026-04-02 00:36:52.236583 | orchestrator |
2026-04-02 00:36:52.236667 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-04-02 00:36:52.236674 | orchestrator |
2026-04-02 00:36:52.236679 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-04-02 00:36:52.236684 | orchestrator | Thursday 02 April 2026 00:36:40 +0000 (0:00:00.284) 0:00:00.284 ********
2026-04-02 00:36:52.236688 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:36:52.236693 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:36:52.236698 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:36:52.236701 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:36:52.236705 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:36:52.236710 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:36:52.236714 | orchestrator |
2026-04-02 00:36:52.236718 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:36:52.236723 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:36:52.236737 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:36:52.236761 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:36:52.236766 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:36:52.236770 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:36:52.236773 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:36:52.236777 | orchestrator |
2026-04-02 00:36:52.236781 | orchestrator |
2026-04-02 00:36:52.236785 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:36:52.236789 | orchestrator | Thursday 02 April 2026 00:36:51 +0000 (0:00:11.398) 0:00:11.682 ********
2026-04-02 00:36:52.236793 | orchestrator | ===============================================================================
2026-04-02 00:36:52.236797 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.40s
2026-04-02 00:36:52.414267 | orchestrator | + osism apply hddtemp
2026-04-02 00:37:03.735520 | orchestrator | 2026-04-02 00:37:03 | INFO  | Prepare task for execution of hddtemp.
2026-04-02 00:37:03.806208 | orchestrator | 2026-04-02 00:37:03 | INFO  | Task cb3d99ac-e455-4c36-ab25-b608c6c9805a (hddtemp) was prepared for execution.
2026-04-02 00:37:03.806310 | orchestrator | 2026-04-02 00:37:03 | INFO  | It takes a moment until task cb3d99ac-e455-4c36-ab25-b608c6c9805a (hddtemp) has been started and output is visible here.
2026-04-02 00:37:30.710952 | orchestrator |
2026-04-02 00:37:30.711077 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-04-02 00:37:30.711096 | orchestrator |
2026-04-02 00:37:30.711109 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-04-02 00:37:30.711121 | orchestrator | Thursday 02 April 2026 00:37:06 +0000 (0:00:00.284) 0:00:00.284 ********
2026-04-02 00:37:30.711133 | orchestrator | ok: [testbed-manager]
2026-04-02 00:37:30.711161 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:37:30.711173 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:37:30.711184 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:37:30.711195 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:37:30.711206 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:37:30.711217 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:37:30.711228 | orchestrator |
2026-04-02 00:37:30.711240 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-04-02 00:37:30.711251 | orchestrator | Thursday 02 April 2026 00:37:07 +0000 (0:00:00.540) 0:00:00.824 ********
2026-04-02 00:37:30.711264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:37:30.711277 | orchestrator |
2026-04-02 00:37:30.711288 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-04-02 00:37:30.711300 | orchestrator | Thursday 02 April 2026 00:37:08 +0000 (0:00:01.033) 0:00:01.857 ********
2026-04-02 00:37:30.711310 | orchestrator | ok: [testbed-manager]
2026-04-02 00:37:30.711339 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:37:30.711350 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:37:30.711361 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:37:30.711372 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:37:30.711383 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:37:30.711394 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:37:30.711405 | orchestrator |
2026-04-02 00:37:30.711418 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-04-02 00:37:30.711432 | orchestrator | Thursday 02 April 2026 00:37:10 +0000 (0:00:02.493) 0:00:04.351 ********
2026-04-02 00:37:30.711444 | orchestrator | changed: [testbed-manager]
2026-04-02 00:37:30.711481 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:37:30.711495 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:37:30.711507 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:37:30.711520 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:37:30.711532 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:37:30.711545 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:37:30.711557 | orchestrator |
2026-04-02 00:37:30.711570 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-04-02 00:37:30.711583 | orchestrator | Thursday 02 April 2026 00:37:11 +0000 (0:00:00.913) 0:00:05.264 ********
2026-04-02 00:37:30.711596 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:37:30.711608 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:37:30.711621 | orchestrator | ok: [testbed-manager]
2026-04-02 00:37:30.711633 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:37:30.711646 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:37:30.711658 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:37:30.711670 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:37:30.711683 | orchestrator |
2026-04-02 00:37:30.711698 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-04-02 00:37:30.711711 | orchestrator | Thursday 02 April 2026 00:37:13 +0000 (0:00:01.289) 0:00:06.554 ********
2026-04-02 00:37:30.711722 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:37:30.711733 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:37:30.711744 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:37:30.711755 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:37:30.711766 | orchestrator | changed: [testbed-manager]
2026-04-02 00:37:30.711776 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:37:30.711787 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:37:30.711798 | orchestrator |
2026-04-02 00:37:30.711809 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-04-02 00:37:30.711820 | orchestrator | Thursday 02 April 2026 00:37:13 +0000 (0:00:00.537) 0:00:07.092 ********
2026-04-02 00:37:30.711853 | orchestrator | changed: [testbed-manager]
2026-04-02 00:37:30.711864 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:37:30.711875 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:37:30.711886 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:37:30.711897 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:37:30.711907 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:37:30.711918 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:37:30.711929 | orchestrator |
2026-04-02 00:37:30.711940 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-04-02 00:37:30.711951 | orchestrator | Thursday 02 April 2026 00:37:27 +0000 (0:00:13.721) 0:00:20.813 ********
2026-04-02 00:37:30.711963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:37:30.711974 | orchestrator |
2026-04-02 00:37:30.711985 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-04-02 00:37:30.711995 | orchestrator | Thursday 02 April 2026 00:37:28 +0000 (0:00:01.173) 0:00:21.986 ********
2026-04-02 00:37:30.712006 | orchestrator | changed: [testbed-manager]
2026-04-02 00:37:30.712017 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:37:30.712028 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:37:30.712038 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:37:30.712049 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:37:30.712060 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:37:30.712070 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:37:30.712081 | orchestrator |
2026-04-02 00:37:30.712092 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:37:30.712103 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:37:30.712133 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-02 00:37:30.712170 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-02 00:37:30.712183 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-02 00:37:30.712194 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-02 00:37:30.712205 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-02 00:37:30.712215 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-02 00:37:30.712226 | orchestrator |
2026-04-02 00:37:30.712237 | orchestrator |
2026-04-02 00:37:30.712248 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:37:30.712259 | orchestrator | Thursday 02 April 2026 00:37:30 +0000 (0:00:01.924) 0:00:23.911 ********
2026-04-02 00:37:30.712270 | orchestrator | ===============================================================================
2026-04-02 00:37:30.712281 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.72s
2026-04-02 00:37:30.712292 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.49s
2026-04-02 00:37:30.712302 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s
2026-04-02 00:37:30.712313 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.29s
2026-04-02 00:37:30.712324 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.17s
2026-04-02 00:37:30.712335 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.03s
2026-04-02 00:37:30.712346 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.91s
2026-04-02 00:37:30.712356 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.54s
2026-04-02 00:37:30.712367 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.54s
2026-04-02 00:37:30.890763 | orchestrator | ++ semver latest 7.1.1
2026-04-02 00:37:30.947890 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-02 00:37:30.947991 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-02 00:37:30.948008 | orchestrator | + sudo systemctl restart manager.service
2026-04-02 00:37:44.743977 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-02 00:37:44.744085 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-02 00:37:44.744100 | orchestrator | + local max_attempts=60
2026-04-02 00:37:44.744110 | orchestrator | + local name=ceph-ansible
2026-04-02 00:37:44.744119 | orchestrator | + local attempt_num=1
2026-04-02 00:37:44.744128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:37:44.771143 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:37:44.771224 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:37:44.771235 | orchestrator | + sleep 5
2026-04-02 00:37:49.774504 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:37:49.797786 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:37:49.797909 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:37:49.797926 | orchestrator | + sleep 5
2026-04-02 00:37:54.800863 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:37:54.841113 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:37:54.841234 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:37:54.841257 | orchestrator | + sleep 5
2026-04-02 00:37:59.844437 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:37:59.878153 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:37:59.878236 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:37:59.878247 | orchestrator | + sleep 5
2026-04-02 00:38:04.882249 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:04.916699 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:04.916845 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:04.916862 | orchestrator | + sleep 5
2026-04-02 00:38:09.921448 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:09.956219 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:09.956312 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:09.956327 | orchestrator | + sleep 5
2026-04-02 00:38:14.960227 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:14.994477 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:14.994572 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:14.994591 | orchestrator | + sleep 5
2026-04-02 00:38:19.999321 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:20.045332 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:20.045448 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:20.045464 | orchestrator | + sleep 5
2026-04-02 00:38:25.059499 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:25.095323 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:25.095410 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:25.095423 | orchestrator | + sleep 5
2026-04-02 00:38:30.100651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:30.142268 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:30.142332 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:30.142338 | orchestrator | + sleep 5
2026-04-02 00:38:35.147785 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:35.186483 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:35.186579 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:35.186595 | orchestrator | + sleep 5
2026-04-02 00:38:40.192096 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:40.229759 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:40.229864 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:40.229879 | orchestrator | + sleep 5
2026-04-02 00:38:45.233222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:45.271698 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:45.271850 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-02 00:38:45.271866 | orchestrator | + sleep 5
2026-04-02 00:38:50.275847 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-02 00:38:50.304882 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:50.305861 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-02 00:38:50.306100 | orchestrator | + local max_attempts=60
2026-04-02 00:38:50.306127 | orchestrator | + local name=kolla-ansible
2026-04-02 00:38:50.306139 | orchestrator | + local attempt_num=1
2026-04-02 00:38:50.306162 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-02 00:38:50.332459 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:50.332550 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-02 00:38:50.332563 | orchestrator | + local max_attempts=60
2026-04-02 00:38:50.332575 | orchestrator | + local name=osism-ansible
2026-04-02 00:38:50.332586 | orchestrator | + local attempt_num=1
2026-04-02 00:38:50.332608 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-02 00:38:50.363744 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-02 00:38:50.363846 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-02 00:38:50.363860 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-02 00:38:50.510137 | orchestrator | ARA in ceph-ansible already disabled.
2026-04-02 00:38:50.636350 | orchestrator | ARA in kolla-ansible already disabled.
2026-04-02 00:38:50.773098 | orchestrator | ARA in osism-ansible already disabled.
2026-04-02 00:38:50.903219 | orchestrator | ARA in osism-kubernetes already disabled.
2026-04-02 00:38:50.903665 | orchestrator | + osism apply gather-facts
2026-04-02 00:39:02.122918 | orchestrator | 2026-04-02 00:39:02 | INFO  | Prepare task for execution of gather-facts.
2026-04-02 00:39:02.193794 | orchestrator | 2026-04-02 00:39:02 | INFO  | Task 6b878a93-2350-426b-a33e-6e198ab9c238 (gather-facts) was prepared for execution.
2026-04-02 00:39:02.193954 | orchestrator | 2026-04-02 00:39:02 | INFO  | It takes a moment until task 6b878a93-2350-426b-a33e-6e198ab9c238 (gather-facts) has been started and output is visible here.
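The `set -x` trace above spells out the body of `wait_for_container_healthy`: it polls `docker inspect -f '{{.State.Health.Status}}'` every five seconds until the container reports `healthy`, giving up after `max_attempts` tries. Reconstructed as a sketch (the real script calls `/usr/bin/docker` directly and may differ in details):

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy from the trace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the Docker health status until it reaches "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log the ceph-ansible container cycles through `unhealthy` and `starting` for roughly a minute after `manager.service` is restarted before the probe finally sees `healthy`.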
2026-04-02 00:39:05.865556 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-04-02 00:39:05.865750 | orchestrator | -vvvv to see details 2026-04-02 00:39:05.865767 | orchestrator | 2026-04-02 00:39:05.865777 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-02 00:39:05.865787 | orchestrator | 2026-04-02 00:39:05.865796 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-02 00:39:05.865817 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-02 00:39:05.865839 | orchestrator | ...ignoring 2026-04-02 00:39:05.865848 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-02 00:39:05.865857 | orchestrator | ...ignoring 2026-04-02 00:39:05.865865 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-02 00:39:05.865873 | orchestrator | ...ignoring 2026-04-02 00:39:05.865882 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-02 00:39:05.865891 | orchestrator | ...ignoring 2026-04-02 00:39:05.865900 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-02 00:39:05.865908 | orchestrator | ...ignoring 2026-04-02 00:39:05.865917 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-02 00:39:05.865925 | orchestrator | ...ignoring 2026-04-02 00:39:05.865934 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-02 00:39:05.865942 | orchestrator | ...ignoring 2026-04-02 00:39:05.865950 | orchestrator | 2026-04-02 00:39:05.865958 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-02 00:39:05.865966 | orchestrator | 2026-04-02 00:39:05.865974 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-02 00:39:05.865982 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:39:05.866009 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:39:05.866069 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:39:05.866099 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:39:05.866108 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:39:05.866116 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:39:05.866124 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:39:05.866132 | orchestrator | 2026-04-02 00:39:05.866140 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:39:05.866149 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:39:05.866185 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:39:05.866194 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:39:05.866213 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:39:05.866238 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:39:05.866246 | orchestrator | testbed-node-4 : ok=1  changed=0 
unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:39:05.866255 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:39:05.866263 | orchestrator | 2026-04-02 00:39:06.031550 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-02 00:39:06.045255 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-02 00:39:06.054966 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-02 00:39:06.065762 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-02 00:39:06.075587 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-02 00:39:06.091920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-02 00:39:06.102493 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-02 00:39:06.113047 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-02 00:39:06.122612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-02 00:39:06.138154 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-02 00:39:06.147656 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-02 00:39:06.161132 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh 
/usr/local/bin/upgrade-ceph-with-rook 2026-04-02 00:39:06.170124 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-02 00:39:06.182880 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-02 00:39:06.193341 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-02 00:39:06.202883 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-02 00:39:06.219794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-02 00:39:06.229036 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-02 00:39:06.242895 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-02 00:39:06.258429 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-02 00:39:06.275612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-02 00:39:06.286950 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-02 00:39:06.307381 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-02 00:39:06.326309 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-02 00:39:06.711310 | orchestrator | ok: Runtime: 0:23:49.508961 2026-04-02 00:39:06.814386 | 2026-04-02 00:39:06.814531 | TASK [Deploy services] 2026-04-02 00:39:07.347415 | orchestrator | skipping: Conditional result was 
False 2026-04-02 00:39:07.364645 | 2026-04-02 00:39:07.364801 | TASK [Deploy in a nutshell] 2026-04-02 00:39:08.080461 | orchestrator | + set -e 2026-04-02 00:39:08.080655 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-02 00:39:08.080691 | orchestrator | ++ export INTERACTIVE=false 2026-04-02 00:39:08.080708 | orchestrator | ++ INTERACTIVE=false 2026-04-02 00:39:08.080719 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-02 00:39:08.080728 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-02 00:39:08.080750 | orchestrator | + source /opt/manager-vars.sh 2026-04-02 00:39:08.080785 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-02 00:39:08.080807 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-02 00:39:08.080817 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-02 00:39:08.080829 | orchestrator | ++ CEPH_VERSION=reef 2026-04-02 00:39:08.080838 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-02 00:39:08.080851 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-02 00:39:08.080859 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-02 00:39:08.080875 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-02 00:39:08.080887 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-02 00:39:08.080898 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-02 00:39:08.080906 | orchestrator | ++ export ARA=false 2026-04-02 00:39:08.080914 | orchestrator | ++ ARA=false 2026-04-02 00:39:08.080922 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-02 00:39:08.080931 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-02 00:39:08.080939 | orchestrator | ++ export TEMPEST=true 2026-04-02 00:39:08.080947 | orchestrator | ++ TEMPEST=true 2026-04-02 00:39:08.080955 | orchestrator | ++ export IS_ZUUL=true 2026-04-02 00:39:08.080963 | orchestrator | ++ IS_ZUUL=true 2026-04-02 00:39:08.080974 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 00:39:08.080983 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 00:39:08.080991 | orchestrator | ++ export EXTERNAL_API=false 2026-04-02 00:39:08.080999 | orchestrator | ++ EXTERNAL_API=false 2026-04-02 00:39:08.081007 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-02 00:39:08.081015 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-02 00:39:08.081022 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-02 00:39:08.081030 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-02 00:39:08.081039 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-02 00:39:08.081047 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-02 00:39:08.081057 | orchestrator | + echo 2026-04-02 00:39:08.081191 | orchestrator | 2026-04-02 00:39:08.081204 | orchestrator | # PULL IMAGES 2026-04-02 00:39:08.081213 | orchestrator | 2026-04-02 00:39:08.081221 | orchestrator | + echo '# PULL IMAGES' 2026-04-02 00:39:08.081229 | orchestrator | + echo 2026-04-02 00:39:08.082462 | orchestrator | ++ semver latest 7.0.0 2026-04-02 00:39:08.137897 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-02 00:39:08.137942 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-02 00:39:08.137962 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-02 00:39:09.364602 | orchestrator | 2026-04-02 00:39:09 | INFO  | Trying to run play pull-images in environment custom 2026-04-02 00:39:19.439511 | orchestrator | 2026-04-02 00:39:19 | INFO  | Prepare task for execution of pull-images. 2026-04-02 00:39:19.508830 | orchestrator | 2026-04-02 00:39:19 | INFO  | Task 4f044873-d899-4557-a958-30f2c8a86a57 (pull-images) was prepared for execution. 2026-04-02 00:39:19.509032 | orchestrator | 2026-04-02 00:39:19 | INFO  | Task 4f044873-d899-4557-a958-30f2c8a86a57 is running in background. No more output. Check ARA for logs. 
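The trace above shows the deploy script gating the `pull-images` play on the manager version: `semver latest 7.0.0` returns `-1`, so the fallback `[[ latest == latest ]]` match lets the play run. A minimal sketch of that pattern, where `version_at_least` is an illustrative stand-in (using GNU `sort -V`) and not the `semver` helper from the trace:

```shell
# Run the play when MANAGER_VERSION is a release >= 7.0.0, or the literal
# "latest" (which never parses as a semver, matching the -1 seen above).
version_at_least() {
  # True if $1 >= $2 by version sort; min of the pair must be $2.
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}
MANAGER_VERSION=latest
if [ "$MANAGER_VERSION" = latest ] || version_at_least "$MANAGER_VERSION" 7.0.0; then
  echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The `--no-wait` flag in the logged command explains the "Task ... is running in background. No more output." line that follows: the CLI returns immediately and the play's output lands in ARA instead of the console.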
2026-04-02 00:39:20.843230 | orchestrator | 2026-04-02 00:39:20 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-02 00:39:30.975259 | orchestrator | 2026-04-02 00:39:30 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-02 00:39:31.053228 | orchestrator | 2026-04-02 00:39:31 | INFO  | Task 104eb7eb-3713-47dc-86e8-9d9c7d425836 (wipe-partitions) was prepared for execution. 2026-04-02 00:39:31.053325 | orchestrator | 2026-04-02 00:39:31 | INFO  | It takes a moment until task 104eb7eb-3713-47dc-86e8-9d9c7d425836 (wipe-partitions) has been started and output is visible here. 2026-04-02 00:39:42.891258 | orchestrator | 2026-04-02 00:39:42.891375 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-02 00:39:42.891403 | orchestrator | 2026-04-02 00:39:42.891431 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-02 00:39:42.891469 | orchestrator | Thursday 02 April 2026 00:39:34 +0000 (0:00:00.158) 0:00:00.158 ******** 2026-04-02 00:39:42.891526 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:39:42.891582 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:39:42.891601 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:39:42.891616 | orchestrator | 2026-04-02 00:39:42.891668 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-02 00:39:42.891687 | orchestrator | Thursday 02 April 2026 00:39:35 +0000 (0:00:01.263) 0:00:01.421 ******** 2026-04-02 00:39:42.891709 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:39:42.891727 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:39:42.891745 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:39:42.891765 | orchestrator | 2026-04-02 00:39:42.891785 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-02 00:39:42.891806 | orchestrator | 
Thursday 02 April 2026 00:39:35 +0000 (0:00:00.261) 0:00:01.683 ******** 2026-04-02 00:39:42.891825 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:39:42.891845 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:39:42.891864 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:39:42.891883 | orchestrator | 2026-04-02 00:39:42.891901 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-02 00:39:42.891920 | orchestrator | Thursday 02 April 2026 00:39:36 +0000 (0:00:00.581) 0:00:02.264 ******** 2026-04-02 00:39:42.891938 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:39:42.891956 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:39:42.891976 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:39:42.891994 | orchestrator | 2026-04-02 00:39:42.892014 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-02 00:39:42.892035 | orchestrator | Thursday 02 April 2026 00:39:36 +0000 (0:00:00.229) 0:00:02.494 ******** 2026-04-02 00:39:42.892054 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-02 00:39:42.892084 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-02 00:39:42.892097 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-02 00:39:42.892108 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-02 00:39:42.892119 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-02 00:39:42.892129 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-02 00:39:42.892140 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-02 00:39:42.892151 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-02 00:39:42.892162 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-02 00:39:42.892173 | orchestrator | 2026-04-02 00:39:42.892184 | orchestrator | TASK [Wipe partitions with wipefs] 
********************************************* 2026-04-02 00:39:42.892196 | orchestrator | Thursday 02 April 2026 00:39:37 +0000 (0:00:01.345) 0:00:03.840 ******** 2026-04-02 00:39:42.892206 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-02 00:39:42.892218 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-02 00:39:42.892228 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-02 00:39:42.892239 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-02 00:39:42.892250 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-02 00:39:42.892260 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-04-02 00:39:42.892271 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-02 00:39:42.892282 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-02 00:39:42.892293 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-02 00:39:42.892303 | orchestrator | 2026-04-02 00:39:42.892324 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-02 00:39:42.892335 | orchestrator | Thursday 02 April 2026 00:39:39 +0000 (0:00:01.351) 0:00:05.191 ******** 2026-04-02 00:39:42.892346 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-02 00:39:42.892356 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-02 00:39:42.892367 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-02 00:39:42.892378 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-02 00:39:42.892409 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-02 00:39:42.892422 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-02 00:39:42.892432 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-02 00:39:42.892443 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-02 00:39:42.892453 | orchestrator | changed: [testbed-node-4] => 
(item=/dev/sdd) 2026-04-02 00:39:42.892464 | orchestrator | 2026-04-02 00:39:42.892475 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-02 00:39:42.892486 | orchestrator | Thursday 02 April 2026 00:39:41 +0000 (0:00:02.214) 0:00:07.405 ******** 2026-04-02 00:39:42.892497 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:39:42.892508 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:39:42.892519 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:39:42.892529 | orchestrator | 2026-04-02 00:39:42.892540 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-04-02 00:39:42.892551 | orchestrator | Thursday 02 April 2026 00:39:42 +0000 (0:00:00.595) 0:00:08.000 ******** 2026-04-02 00:39:42.892562 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:39:42.892572 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:39:42.892583 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:39:42.892595 | orchestrator | 2026-04-02 00:39:42.892606 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:39:42.892670 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:39:42.892686 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:39:42.892722 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:39:42.892733 | orchestrator | 2026-04-02 00:39:42.892744 | orchestrator | 2026-04-02 00:39:42.892755 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:39:42.892766 | orchestrator | Thursday 02 April 2026 00:39:42 +0000 (0:00:00.595) 0:00:08.596 ******** 2026-04-02 00:39:42.892777 | orchestrator | 
=============================================================================== 2026-04-02 00:39:42.892788 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s 2026-04-02 00:39:42.892798 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.35s 2026-04-02 00:39:42.892809 | orchestrator | Check device availability ----------------------------------------------- 1.35s 2026-04-02 00:39:42.892820 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.26s 2026-04-02 00:39:42.892831 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2026-04-02 00:39:42.892841 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2026-04-02 00:39:42.892852 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s 2026-04-02 00:39:42.892863 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2026-04-02 00:39:42.892873 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2026-04-02 00:39:54.367890 | orchestrator | 2026-04-02 00:39:54 | INFO  | Prepare task for execution of facts. 2026-04-02 00:39:54.440984 | orchestrator | 2026-04-02 00:39:54 | INFO  | Task 5cf527eb-d2c3-4b1f-a5c5-2eb974e9614f (facts) was prepared for execution. 2026-04-02 00:39:54.441083 | orchestrator | 2026-04-02 00:39:54 | INFO  | It takes a moment until task 5cf527eb-d2c3-4b1f-a5c5-2eb974e9614f (facts) has been started and output is visible here. 
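The wipe-partitions play above clears stale Ceph metadata from `/dev/sdb`..`/dev/sdd` on each storage node: `wipefs` removes filesystem/LVM signatures, `dd` zeroes the first 32 MiB, and udev is re-triggered so the kernel sees the clean devices. A safe re-creation of the destructive middle steps against a scratch file instead of a real block device (the file and sizes here are illustrative; `wipefs` itself is omitted since the zero-fill already covers its effect on this stand-in):

```shell
# Scratch file standing in for a small "disk" with a stale signature.
img=$(mktemp)
truncate -s 64M "$img"
printf 'FAKE-SIGNATURE' | dd of="$img" conv=notrunc status=none
# The "Overwrite first 32M with zeros" task, file-sized down:
dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc status=none
# Confirm the fake signature is gone (only NUL bytes remain at offset 0):
head -c 14 "$img" | tr -d '\0' | wc -c
rm -f "$img"
```

On real hardware the play follows this with `udevadm`-style rule reloads and kernel device events (the last two tasks in the recap) so that consumers like ceph-volume enumerate the now-blank disks correctly.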
2026-04-02 00:40:05.603914 | orchestrator | 2026-04-02 00:40:05.604016 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-02 00:40:05.604030 | orchestrator | 2026-04-02 00:40:05.604062 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-02 00:40:05.604073 | orchestrator | Thursday 02 April 2026 00:39:57 +0000 (0:00:00.235) 0:00:00.235 ******** 2026-04-02 00:40:05.604082 | orchestrator | ok: [testbed-manager] 2026-04-02 00:40:05.604092 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:40:05.604101 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:40:05.604110 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:40:05.604119 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:40:05.604128 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:40:05.604136 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:40:05.604145 | orchestrator | 2026-04-02 00:40:05.604155 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-02 00:40:05.604164 | orchestrator | Thursday 02 April 2026 00:39:58 +0000 (0:00:01.223) 0:00:01.458 ******** 2026-04-02 00:40:05.604173 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:40:05.604182 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:40:05.604191 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:40:05.604200 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:40:05.604209 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:05.604217 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:05.604226 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:05.604235 | orchestrator | 2026-04-02 00:40:05.604244 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-02 00:40:05.604268 | orchestrator | 2026-04-02 00:40:05.604277 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-02 00:40:05.604287 | orchestrator | Thursday 02 April 2026 00:39:59 +0000 (0:00:01.017) 0:00:02.476 ******** 2026-04-02 00:40:05.604300 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:40:05.604316 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:40:05.604331 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:40:05.604347 | orchestrator | ok: [testbed-manager] 2026-04-02 00:40:05.604362 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:40:05.604378 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:40:05.604395 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:40:05.604411 | orchestrator | 2026-04-02 00:40:05.604427 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-02 00:40:05.604441 | orchestrator | 2026-04-02 00:40:05.604450 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-02 00:40:05.604459 | orchestrator | Thursday 02 April 2026 00:40:04 +0000 (0:00:05.112) 0:00:07.588 ******** 2026-04-02 00:40:05.604468 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:40:05.604478 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:40:05.604488 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:40:05.604498 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:40:05.604507 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:05.604517 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:05.604527 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:05.604537 | orchestrator | 2026-04-02 00:40:05.604547 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:40:05.604558 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:40:05.604570 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-02 00:40:05.604580 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:40:05.604647 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:40:05.604657 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:40:05.604674 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:40:05.604683 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:40:05.604692 | orchestrator | 2026-04-02 00:40:05.604700 | orchestrator | 2026-04-02 00:40:05.604709 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:40:05.604724 | orchestrator | Thursday 02 April 2026 00:40:05 +0000 (0:00:00.483) 0:00:08.072 ******** 2026-04-02 00:40:05.604739 | orchestrator | =============================================================================== 2026-04-02 00:40:05.604755 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.11s 2026-04-02 00:40:05.604772 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s 2026-04-02 00:40:05.604788 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.02s 2026-04-02 00:40:05.604804 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-04-02 00:40:07.079633 | orchestrator | 2026-04-02 00:40:07 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-02 00:40:07.143347 | orchestrator | 2026-04-02 00:40:07 | INFO  | Task 6e351622-f55a-4646-a688-0ced90095250 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-04-02 00:40:07.143415 | orchestrator | 2026-04-02 00:40:07 | INFO  | It takes a moment until task 6e351622-f55a-4646-a688-0ced90095250 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-04-02 00:40:18.115276 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-02 00:40:18.115386 | orchestrator | 2.16.14 2026-04-02 00:40:18.115403 | orchestrator | 2026-04-02 00:40:18.115415 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-02 00:40:18.115428 | orchestrator | 2026-04-02 00:40:18.115440 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-02 00:40:18.115451 | orchestrator | Thursday 02 April 2026 00:40:11 +0000 (0:00:00.280) 0:00:00.280 ******** 2026-04-02 00:40:18.115462 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 00:40:18.115474 | orchestrator | 2026-04-02 00:40:18.115484 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-02 00:40:18.115495 | orchestrator | Thursday 02 April 2026 00:40:11 +0000 (0:00:00.249) 0:00:00.530 ******** 2026-04-02 00:40:18.115507 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:40:18.115518 | orchestrator | 2026-04-02 00:40:18.115529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.115540 | orchestrator | Thursday 02 April 2026 00:40:12 +0000 (0:00:00.229) 0:00:00.759 ******** 2026-04-02 00:40:18.115562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-02 00:40:18.115672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-02 00:40:18.115688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-02 00:40:18.115699 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-02 00:40:18.115709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-02 00:40:18.115720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-02 00:40:18.115731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-02 00:40:18.115741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-02 00:40:18.115752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-02 00:40:18.115762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-02 00:40:18.115799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-02 00:40:18.115811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-02 00:40:18.115824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-02 00:40:18.115836 | orchestrator | 2026-04-02 00:40:18.115849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.115861 | orchestrator | Thursday 02 April 2026 00:40:12 +0000 (0:00:00.353) 0:00:01.113 ******** 2026-04-02 00:40:18.115873 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.115885 | orchestrator | 2026-04-02 00:40:18.115898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.115910 | orchestrator | Thursday 02 April 2026 00:40:12 +0000 (0:00:00.420) 0:00:01.533 ******** 2026-04-02 00:40:18.115923 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.115936 | orchestrator | 2026-04-02 00:40:18.115949 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.115967 | orchestrator | Thursday 02 April 2026 00:40:13 +0000 (0:00:00.180) 0:00:01.714 ******** 2026-04-02 00:40:18.115981 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.115993 | orchestrator | 2026-04-02 00:40:18.116006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116020 | orchestrator | Thursday 02 April 2026 00:40:13 +0000 (0:00:00.168) 0:00:01.882 ******** 2026-04-02 00:40:18.116032 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116045 | orchestrator | 2026-04-02 00:40:18.116057 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116069 | orchestrator | Thursday 02 April 2026 00:40:13 +0000 (0:00:00.171) 0:00:02.054 ******** 2026-04-02 00:40:18.116083 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116096 | orchestrator | 2026-04-02 00:40:18.116108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116120 | orchestrator | Thursday 02 April 2026 00:40:13 +0000 (0:00:00.171) 0:00:02.226 ******** 2026-04-02 00:40:18.116132 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116145 | orchestrator | 2026-04-02 00:40:18.116157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116169 | orchestrator | Thursday 02 April 2026 00:40:13 +0000 (0:00:00.163) 0:00:02.389 ******** 2026-04-02 00:40:18.116180 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116190 | orchestrator | 2026-04-02 00:40:18.116201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116212 | orchestrator | Thursday 02 April 2026 00:40:13 +0000 (0:00:00.169) 0:00:02.558 ******** 
2026-04-02 00:40:18.116223 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116234 | orchestrator | 2026-04-02 00:40:18.116245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116255 | orchestrator | Thursday 02 April 2026 00:40:14 +0000 (0:00:00.177) 0:00:02.735 ******** 2026-04-02 00:40:18.116266 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d) 2026-04-02 00:40:18.116278 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d) 2026-04-02 00:40:18.116289 | orchestrator | 2026-04-02 00:40:18.116300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116328 | orchestrator | Thursday 02 April 2026 00:40:14 +0000 (0:00:00.353) 0:00:03.089 ******** 2026-04-02 00:40:18.116340 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb) 2026-04-02 00:40:18.116351 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb) 2026-04-02 00:40:18.116362 | orchestrator | 2026-04-02 00:40:18.116379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116400 | orchestrator | Thursday 02 April 2026 00:40:14 +0000 (0:00:00.362) 0:00:03.451 ******** 2026-04-02 00:40:18.116411 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45) 2026-04-02 00:40:18.116422 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45) 2026-04-02 00:40:18.116433 | orchestrator | 2026-04-02 00:40:18.116443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116454 | orchestrator | Thursday 02 April 2026 00:40:15 
+0000 (0:00:00.502) 0:00:03.954 ******** 2026-04-02 00:40:18.116465 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161) 2026-04-02 00:40:18.116476 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161) 2026-04-02 00:40:18.116487 | orchestrator | 2026-04-02 00:40:18.116498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:18.116509 | orchestrator | Thursday 02 April 2026 00:40:15 +0000 (0:00:00.554) 0:00:04.508 ******** 2026-04-02 00:40:18.116520 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-02 00:40:18.116531 | orchestrator | 2026-04-02 00:40:18.116542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.116553 | orchestrator | Thursday 02 April 2026 00:40:16 +0000 (0:00:00.560) 0:00:05.069 ******** 2026-04-02 00:40:18.116563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-02 00:40:18.116596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-02 00:40:18.116608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-02 00:40:18.116619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-02 00:40:18.116630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-02 00:40:18.116640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-02 00:40:18.116651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-02 00:40:18.116661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-04-02 00:40:18.116672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-02 00:40:18.116683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-02 00:40:18.116694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-02 00:40:18.116705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-02 00:40:18.116715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-02 00:40:18.116726 | orchestrator | 2026-04-02 00:40:18.116737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.116748 | orchestrator | Thursday 02 April 2026 00:40:16 +0000 (0:00:00.379) 0:00:05.448 ******** 2026-04-02 00:40:18.116758 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116769 | orchestrator | 2026-04-02 00:40:18.116780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.116790 | orchestrator | Thursday 02 April 2026 00:40:16 +0000 (0:00:00.176) 0:00:05.625 ******** 2026-04-02 00:40:18.116801 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116812 | orchestrator | 2026-04-02 00:40:18.116822 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.116833 | orchestrator | Thursday 02 April 2026 00:40:17 +0000 (0:00:00.204) 0:00:05.829 ******** 2026-04-02 00:40:18.116843 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116861 | orchestrator | 2026-04-02 00:40:18.116872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.116883 | orchestrator | Thursday 02 April 2026 00:40:17 
+0000 (0:00:00.181) 0:00:06.011 ******** 2026-04-02 00:40:18.116894 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116904 | orchestrator | 2026-04-02 00:40:18.116915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.116926 | orchestrator | Thursday 02 April 2026 00:40:17 +0000 (0:00:00.187) 0:00:06.198 ******** 2026-04-02 00:40:18.116936 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116947 | orchestrator | 2026-04-02 00:40:18.116958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.116969 | orchestrator | Thursday 02 April 2026 00:40:17 +0000 (0:00:00.200) 0:00:06.399 ******** 2026-04-02 00:40:18.116980 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.116990 | orchestrator | 2026-04-02 00:40:18.117001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:18.117012 | orchestrator | Thursday 02 April 2026 00:40:17 +0000 (0:00:00.203) 0:00:06.603 ******** 2026-04-02 00:40:18.117023 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:18.117034 | orchestrator | 2026-04-02 00:40:18.117051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:24.715761 | orchestrator | Thursday 02 April 2026 00:40:18 +0000 (0:00:00.207) 0:00:06.810 ******** 2026-04-02 00:40:24.715870 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.715886 | orchestrator | 2026-04-02 00:40:24.715898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:24.715910 | orchestrator | Thursday 02 April 2026 00:40:18 +0000 (0:00:00.189) 0:00:07.000 ******** 2026-04-02 00:40:24.715922 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-02 00:40:24.715934 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-02 
00:40:24.715945 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-02 00:40:24.715956 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-02 00:40:24.715967 | orchestrator | 2026-04-02 00:40:24.715979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:24.716009 | orchestrator | Thursday 02 April 2026 00:40:19 +0000 (0:00:00.916) 0:00:07.916 ******** 2026-04-02 00:40:24.716020 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716031 | orchestrator | 2026-04-02 00:40:24.716042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:24.716054 | orchestrator | Thursday 02 April 2026 00:40:19 +0000 (0:00:00.188) 0:00:08.105 ******** 2026-04-02 00:40:24.716065 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716076 | orchestrator | 2026-04-02 00:40:24.716087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:24.716098 | orchestrator | Thursday 02 April 2026 00:40:19 +0000 (0:00:00.174) 0:00:08.279 ******** 2026-04-02 00:40:24.716109 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716120 | orchestrator | 2026-04-02 00:40:24.716131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:24.716142 | orchestrator | Thursday 02 April 2026 00:40:19 +0000 (0:00:00.179) 0:00:08.458 ******** 2026-04-02 00:40:24.716153 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716164 | orchestrator | 2026-04-02 00:40:24.716175 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-02 00:40:24.716186 | orchestrator | Thursday 02 April 2026 00:40:19 +0000 (0:00:00.167) 0:00:08.626 ******** 2026-04-02 00:40:24.716197 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-02 00:40:24.716209 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-02 00:40:24.716219 | orchestrator | 2026-04-02 00:40:24.716230 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-02 00:40:24.716241 | orchestrator | Thursday 02 April 2026 00:40:20 +0000 (0:00:00.139) 0:00:08.765 ******** 2026-04-02 00:40:24.716276 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716288 | orchestrator | 2026-04-02 00:40:24.716301 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-02 00:40:24.716315 | orchestrator | Thursday 02 April 2026 00:40:20 +0000 (0:00:00.114) 0:00:08.880 ******** 2026-04-02 00:40:24.716328 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716341 | orchestrator | 2026-04-02 00:40:24.716353 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-02 00:40:24.716367 | orchestrator | Thursday 02 April 2026 00:40:20 +0000 (0:00:00.124) 0:00:09.004 ******** 2026-04-02 00:40:24.716380 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716393 | orchestrator | 2026-04-02 00:40:24.716405 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-02 00:40:24.716418 | orchestrator | Thursday 02 April 2026 00:40:20 +0000 (0:00:00.104) 0:00:09.108 ******** 2026-04-02 00:40:24.716431 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:40:24.716444 | orchestrator | 2026-04-02 00:40:24.716456 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-02 00:40:24.716469 | orchestrator | Thursday 02 April 2026 00:40:20 +0000 (0:00:00.135) 0:00:09.244 ******** 2026-04-02 00:40:24.716482 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'}}) 2026-04-02 00:40:24.716495 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c3a3e1f2-53da-5696-b7a3-d36d02964763'}}) 2026-04-02 00:40:24.716508 | orchestrator | 2026-04-02 00:40:24.716520 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-02 00:40:24.716533 | orchestrator | Thursday 02 April 2026 00:40:20 +0000 (0:00:00.138) 0:00:09.383 ******** 2026-04-02 00:40:24.716546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'}})  2026-04-02 00:40:24.716591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c3a3e1f2-53da-5696-b7a3-d36d02964763'}})  2026-04-02 00:40:24.716610 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716623 | orchestrator | 2026-04-02 00:40:24.716637 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-02 00:40:24.716650 | orchestrator | Thursday 02 April 2026 00:40:20 +0000 (0:00:00.124) 0:00:09.507 ******** 2026-04-02 00:40:24.716661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'}})  2026-04-02 00:40:24.716672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c3a3e1f2-53da-5696-b7a3-d36d02964763'}})  2026-04-02 00:40:24.716683 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716694 | orchestrator | 2026-04-02 00:40:24.716705 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-02 00:40:24.716716 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.241) 0:00:09.749 ******** 2026-04-02 00:40:24.716727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'}})  2026-04-02 00:40:24.716756 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c3a3e1f2-53da-5696-b7a3-d36d02964763'}})  2026-04-02 00:40:24.716768 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716779 | orchestrator | 2026-04-02 00:40:24.716790 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-02 00:40:24.716801 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.127) 0:00:09.876 ******** 2026-04-02 00:40:24.716812 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:40:24.716823 | orchestrator | 2026-04-02 00:40:24.716834 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-02 00:40:24.716845 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.126) 0:00:10.003 ******** 2026-04-02 00:40:24.716856 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:40:24.716876 | orchestrator | 2026-04-02 00:40:24.716887 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-02 00:40:24.716898 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.102) 0:00:10.106 ******** 2026-04-02 00:40:24.716909 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716921 | orchestrator | 2026-04-02 00:40:24.716932 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-02 00:40:24.716950 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.117) 0:00:10.223 ******** 2026-04-02 00:40:24.716969 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.716991 | orchestrator | 2026-04-02 00:40:24.717019 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-02 00:40:24.717038 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.117) 0:00:10.340 ******** 2026-04-02 00:40:24.717057 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.717076 | orchestrator | 2026-04-02 
00:40:24.717095 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-02 00:40:24.717114 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.119) 0:00:10.459 ******** 2026-04-02 00:40:24.717133 | orchestrator | ok: [testbed-node-3] => { 2026-04-02 00:40:24.717152 | orchestrator |  "ceph_osd_devices": { 2026-04-02 00:40:24.717173 | orchestrator |  "sdb": { 2026-04-02 00:40:24.717193 | orchestrator |  "osd_lvm_uuid": "3f9aa46c-6044-534e-8fed-f8e8e1b6cabb" 2026-04-02 00:40:24.717213 | orchestrator |  }, 2026-04-02 00:40:24.717232 | orchestrator |  "sdc": { 2026-04-02 00:40:24.717254 | orchestrator |  "osd_lvm_uuid": "c3a3e1f2-53da-5696-b7a3-d36d02964763" 2026-04-02 00:40:24.717275 | orchestrator |  } 2026-04-02 00:40:24.717295 | orchestrator |  } 2026-04-02 00:40:24.717314 | orchestrator | } 2026-04-02 00:40:24.717333 | orchestrator | 2026-04-02 00:40:24.717353 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-02 00:40:24.717373 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.120) 0:00:10.580 ******** 2026-04-02 00:40:24.717394 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.717414 | orchestrator | 2026-04-02 00:40:24.717434 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-02 00:40:24.717455 | orchestrator | Thursday 02 April 2026 00:40:21 +0000 (0:00:00.116) 0:00:10.696 ******** 2026-04-02 00:40:24.717474 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.717495 | orchestrator | 2026-04-02 00:40:24.717515 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-02 00:40:24.717536 | orchestrator | Thursday 02 April 2026 00:40:22 +0000 (0:00:00.124) 0:00:10.820 ******** 2026-04-02 00:40:24.717557 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:40:24.717606 | orchestrator | 2026-04-02 
00:40:24.717625 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-02 00:40:24.717644 | orchestrator | Thursday 02 April 2026 00:40:22 +0000 (0:00:00.108) 0:00:10.929 ******** 2026-04-02 00:40:24.717664 | orchestrator | changed: [testbed-node-3] => { 2026-04-02 00:40:24.717683 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-02 00:40:24.717703 | orchestrator |  "ceph_osd_devices": { 2026-04-02 00:40:24.717721 | orchestrator |  "sdb": { 2026-04-02 00:40:24.717741 | orchestrator |  "osd_lvm_uuid": "3f9aa46c-6044-534e-8fed-f8e8e1b6cabb" 2026-04-02 00:40:24.717760 | orchestrator |  }, 2026-04-02 00:40:24.717779 | orchestrator |  "sdc": { 2026-04-02 00:40:24.717799 | orchestrator |  "osd_lvm_uuid": "c3a3e1f2-53da-5696-b7a3-d36d02964763" 2026-04-02 00:40:24.717818 | orchestrator |  } 2026-04-02 00:40:24.717838 | orchestrator |  }, 2026-04-02 00:40:24.717857 | orchestrator |  "lvm_volumes": [ 2026-04-02 00:40:24.717876 | orchestrator |  { 2026-04-02 00:40:24.717895 | orchestrator |  "data": "osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb", 2026-04-02 00:40:24.717914 | orchestrator |  "data_vg": "ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb" 2026-04-02 00:40:24.717949 | orchestrator |  }, 2026-04-02 00:40:24.717969 | orchestrator |  { 2026-04-02 00:40:24.717988 | orchestrator |  "data": "osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763", 2026-04-02 00:40:24.718007 | orchestrator |  "data_vg": "ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763" 2026-04-02 00:40:24.718105 | orchestrator |  } 2026-04-02 00:40:24.718127 | orchestrator |  ] 2026-04-02 00:40:24.718147 | orchestrator |  } 2026-04-02 00:40:24.718168 | orchestrator | } 2026-04-02 00:40:24.718188 | orchestrator | 2026-04-02 00:40:24.718208 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-02 00:40:24.718274 | orchestrator | Thursday 02 April 2026 00:40:22 +0000 (0:00:00.181) 0:00:11.110 ******** 2026-04-02 
00:40:24.718294 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 00:40:24.718315 | orchestrator | 2026-04-02 00:40:24.718336 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-02 00:40:24.718356 | orchestrator | 2026-04-02 00:40:24.718377 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-02 00:40:24.718397 | orchestrator | Thursday 02 April 2026 00:40:24 +0000 (0:00:01.863) 0:00:12.973 ******** 2026-04-02 00:40:24.718418 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-02 00:40:24.718438 | orchestrator | 2026-04-02 00:40:24.718459 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-02 00:40:24.718479 | orchestrator | Thursday 02 April 2026 00:40:24 +0000 (0:00:00.226) 0:00:13.200 ******** 2026-04-02 00:40:24.718500 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:40:24.718520 | orchestrator | 2026-04-02 00:40:24.718556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232257 | orchestrator | Thursday 02 April 2026 00:40:24 +0000 (0:00:00.213) 0:00:13.413 ******** 2026-04-02 00:40:31.232356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-02 00:40:31.232372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-02 00:40:31.232383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-02 00:40:31.232394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-02 00:40:31.232405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-02 00:40:31.232416 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-02 00:40:31.232427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-02 00:40:31.232442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-02 00:40:31.232453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-02 00:40:31.232464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-02 00:40:31.232475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-02 00:40:31.232486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-02 00:40:31.232514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-02 00:40:31.232526 | orchestrator | 2026-04-02 00:40:31.232538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232549 | orchestrator | Thursday 02 April 2026 00:40:25 +0000 (0:00:00.349) 0:00:13.762 ******** 2026-04-02 00:40:31.232611 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.232624 | orchestrator | 2026-04-02 00:40:31.232635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232646 | orchestrator | Thursday 02 April 2026 00:40:25 +0000 (0:00:00.176) 0:00:13.939 ******** 2026-04-02 00:40:31.232679 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.232690 | orchestrator | 2026-04-02 00:40:31.232701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232712 | orchestrator | Thursday 02 April 2026 00:40:25 +0000 (0:00:00.189) 0:00:14.129 ******** 2026-04-02 00:40:31.232723 | orchestrator | skipping: 
[testbed-node-4] 2026-04-02 00:40:31.232734 | orchestrator | 2026-04-02 00:40:31.232745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232756 | orchestrator | Thursday 02 April 2026 00:40:25 +0000 (0:00:00.151) 0:00:14.280 ******** 2026-04-02 00:40:31.232767 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.232777 | orchestrator | 2026-04-02 00:40:31.232788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232799 | orchestrator | Thursday 02 April 2026 00:40:25 +0000 (0:00:00.177) 0:00:14.458 ******** 2026-04-02 00:40:31.232810 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.232821 | orchestrator | 2026-04-02 00:40:31.232832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232842 | orchestrator | Thursday 02 April 2026 00:40:26 +0000 (0:00:00.432) 0:00:14.890 ******** 2026-04-02 00:40:31.232853 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.232864 | orchestrator | 2026-04-02 00:40:31.232875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232886 | orchestrator | Thursday 02 April 2026 00:40:26 +0000 (0:00:00.176) 0:00:15.067 ******** 2026-04-02 00:40:31.232897 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.232908 | orchestrator | 2026-04-02 00:40:31.232919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232929 | orchestrator | Thursday 02 April 2026 00:40:26 +0000 (0:00:00.179) 0:00:15.247 ******** 2026-04-02 00:40:31.232940 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.232951 | orchestrator | 2026-04-02 00:40:31.232962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.232973 | 
orchestrator | Thursday 02 April 2026 00:40:26 +0000 (0:00:00.175) 0:00:15.423 ******** 2026-04-02 00:40:31.232984 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d) 2026-04-02 00:40:31.232997 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d) 2026-04-02 00:40:31.233008 | orchestrator | 2026-04-02 00:40:31.233019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.233030 | orchestrator | Thursday 02 April 2026 00:40:27 +0000 (0:00:00.359) 0:00:15.782 ******** 2026-04-02 00:40:31.233041 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4) 2026-04-02 00:40:31.233052 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4) 2026-04-02 00:40:31.233063 | orchestrator | 2026-04-02 00:40:31.233074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.233085 | orchestrator | Thursday 02 April 2026 00:40:27 +0000 (0:00:00.361) 0:00:16.144 ******** 2026-04-02 00:40:31.233096 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf) 2026-04-02 00:40:31.233107 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf) 2026-04-02 00:40:31.233118 | orchestrator | 2026-04-02 00:40:31.233129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.233158 | orchestrator | Thursday 02 April 2026 00:40:27 +0000 (0:00:00.377) 0:00:16.522 ******** 2026-04-02 00:40:31.233170 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3) 2026-04-02 00:40:31.233181 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3) 2026-04-02 00:40:31.233192 | orchestrator | 2026-04-02 00:40:31.233211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:31.233222 | orchestrator | Thursday 02 April 2026 00:40:28 +0000 (0:00:00.375) 0:00:16.898 ******** 2026-04-02 00:40:31.233233 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-02 00:40:31.233244 | orchestrator | 2026-04-02 00:40:31.233255 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233266 | orchestrator | Thursday 02 April 2026 00:40:28 +0000 (0:00:00.313) 0:00:17.211 ******** 2026-04-02 00:40:31.233277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-02 00:40:31.233287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-02 00:40:31.233305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-02 00:40:31.233316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-02 00:40:31.233327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-02 00:40:31.233338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-02 00:40:31.233349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-02 00:40:31.233360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-02 00:40:31.233370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-02 00:40:31.233381 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-02 00:40:31.233392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-02 00:40:31.233402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-02 00:40:31.233413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-02 00:40:31.233424 | orchestrator | 2026-04-02 00:40:31.233435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233457 | orchestrator | Thursday 02 April 2026 00:40:28 +0000 (0:00:00.342) 0:00:17.554 ******** 2026-04-02 00:40:31.233468 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233478 | orchestrator | 2026-04-02 00:40:31.233489 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233500 | orchestrator | Thursday 02 April 2026 00:40:29 +0000 (0:00:00.170) 0:00:17.724 ******** 2026-04-02 00:40:31.233510 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233521 | orchestrator | 2026-04-02 00:40:31.233532 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233543 | orchestrator | Thursday 02 April 2026 00:40:29 +0000 (0:00:00.460) 0:00:18.185 ******** 2026-04-02 00:40:31.233569 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233580 | orchestrator | 2026-04-02 00:40:31.233591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233602 | orchestrator | Thursday 02 April 2026 00:40:29 +0000 (0:00:00.185) 0:00:18.370 ******** 2026-04-02 00:40:31.233613 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233624 | orchestrator | 2026-04-02 00:40:31.233635 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233646 | orchestrator | Thursday 02 April 2026 00:40:29 +0000 (0:00:00.174) 0:00:18.545 ******** 2026-04-02 00:40:31.233656 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233667 | orchestrator | 2026-04-02 00:40:31.233678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233688 | orchestrator | Thursday 02 April 2026 00:40:30 +0000 (0:00:00.174) 0:00:18.719 ******** 2026-04-02 00:40:31.233699 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233717 | orchestrator | 2026-04-02 00:40:31.233728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233738 | orchestrator | Thursday 02 April 2026 00:40:30 +0000 (0:00:00.186) 0:00:18.906 ******** 2026-04-02 00:40:31.233749 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233760 | orchestrator | 2026-04-02 00:40:31.233771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233781 | orchestrator | Thursday 02 April 2026 00:40:30 +0000 (0:00:00.182) 0:00:19.088 ******** 2026-04-02 00:40:31.233792 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:31.233803 | orchestrator | 2026-04-02 00:40:31.233813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233824 | orchestrator | Thursday 02 April 2026 00:40:30 +0000 (0:00:00.178) 0:00:19.267 ******** 2026-04-02 00:40:31.233835 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-02 00:40:31.233847 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-02 00:40:31.233858 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-02 00:40:31.233869 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-02 00:40:31.233879 | orchestrator | 2026-04-02 
00:40:31.233890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:31.233901 | orchestrator | Thursday 02 April 2026 00:40:31 +0000 (0:00:00.565) 0:00:19.832 ******** 2026-04-02 00:40:31.233912 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.727637 | orchestrator | 2026-04-02 00:40:36.727786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:36.727797 | orchestrator | Thursday 02 April 2026 00:40:31 +0000 (0:00:00.166) 0:00:19.999 ******** 2026-04-02 00:40:36.727803 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.727809 | orchestrator | 2026-04-02 00:40:36.727814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:36.727820 | orchestrator | Thursday 02 April 2026 00:40:31 +0000 (0:00:00.166) 0:00:20.165 ******** 2026-04-02 00:40:36.727825 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.727830 | orchestrator | 2026-04-02 00:40:36.727835 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:36.727840 | orchestrator | Thursday 02 April 2026 00:40:31 +0000 (0:00:00.169) 0:00:20.335 ******** 2026-04-02 00:40:36.727845 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.727850 | orchestrator | 2026-04-02 00:40:36.727855 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-02 00:40:36.727860 | orchestrator | Thursday 02 April 2026 00:40:31 +0000 (0:00:00.173) 0:00:20.509 ******** 2026-04-02 00:40:36.727865 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-02 00:40:36.727870 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-02 00:40:36.727875 | orchestrator | 2026-04-02 00:40:36.727880 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-02 00:40:36.727912 | orchestrator | Thursday 02 April 2026 00:40:32 +0000 (0:00:00.307) 0:00:20.816 ******** 2026-04-02 00:40:36.727917 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.727922 | orchestrator | 2026-04-02 00:40:36.727927 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-02 00:40:36.727932 | orchestrator | Thursday 02 April 2026 00:40:32 +0000 (0:00:00.149) 0:00:20.966 ******** 2026-04-02 00:40:36.727937 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.727942 | orchestrator | 2026-04-02 00:40:36.727946 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-02 00:40:36.727961 | orchestrator | Thursday 02 April 2026 00:40:32 +0000 (0:00:00.149) 0:00:21.115 ******** 2026-04-02 00:40:36.727967 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.727972 | orchestrator | 2026-04-02 00:40:36.727977 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-02 00:40:36.727981 | orchestrator | Thursday 02 April 2026 00:40:32 +0000 (0:00:00.106) 0:00:21.222 ******** 2026-04-02 00:40:36.728016 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:40:36.728023 | orchestrator | 2026-04-02 00:40:36.728028 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-02 00:40:36.728033 | orchestrator | Thursday 02 April 2026 00:40:32 +0000 (0:00:00.103) 0:00:21.326 ******** 2026-04-02 00:40:36.728038 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '88a5a1a0-9236-5c9d-8025-e39ec03fb505'}}) 2026-04-02 00:40:36.728044 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b27c5b00-4597-5124-934a-fd641c3feb65'}}) 2026-04-02 00:40:36.728048 | orchestrator | 2026-04-02 00:40:36.728053 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-02 00:40:36.728058 | orchestrator | Thursday 02 April 2026 00:40:32 +0000 (0:00:00.138) 0:00:21.464 ******** 2026-04-02 00:40:36.728064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '88a5a1a0-9236-5c9d-8025-e39ec03fb505'}})  2026-04-02 00:40:36.728070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b27c5b00-4597-5124-934a-fd641c3feb65'}})  2026-04-02 00:40:36.728075 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728079 | orchestrator | 2026-04-02 00:40:36.728084 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-02 00:40:36.728089 | orchestrator | Thursday 02 April 2026 00:40:32 +0000 (0:00:00.122) 0:00:21.587 ******** 2026-04-02 00:40:36.728094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '88a5a1a0-9236-5c9d-8025-e39ec03fb505'}})  2026-04-02 00:40:36.728098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b27c5b00-4597-5124-934a-fd641c3feb65'}})  2026-04-02 00:40:36.728104 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728108 | orchestrator | 2026-04-02 00:40:36.728113 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-02 00:40:36.728118 | orchestrator | Thursday 02 April 2026 00:40:33 +0000 (0:00:00.138) 0:00:21.725 ******** 2026-04-02 00:40:36.728123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '88a5a1a0-9236-5c9d-8025-e39ec03fb505'}})  2026-04-02 00:40:36.728128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b27c5b00-4597-5124-934a-fd641c3feb65'}})  2026-04-02 00:40:36.728132 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728137 | 
orchestrator | 2026-04-02 00:40:36.728142 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-02 00:40:36.728147 | orchestrator | Thursday 02 April 2026 00:40:33 +0000 (0:00:00.164) 0:00:21.890 ******** 2026-04-02 00:40:36.728151 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:40:36.728156 | orchestrator | 2026-04-02 00:40:36.728161 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-02 00:40:36.728166 | orchestrator | Thursday 02 April 2026 00:40:33 +0000 (0:00:00.147) 0:00:22.037 ******** 2026-04-02 00:40:36.728171 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:40:36.728176 | orchestrator | 2026-04-02 00:40:36.728180 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-02 00:40:36.728185 | orchestrator | Thursday 02 April 2026 00:40:33 +0000 (0:00:00.124) 0:00:22.162 ******** 2026-04-02 00:40:36.728210 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728216 | orchestrator | 2026-04-02 00:40:36.728221 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-02 00:40:36.728225 | orchestrator | Thursday 02 April 2026 00:40:33 +0000 (0:00:00.102) 0:00:22.264 ******** 2026-04-02 00:40:36.728230 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728235 | orchestrator | 2026-04-02 00:40:36.728240 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-02 00:40:36.728244 | orchestrator | Thursday 02 April 2026 00:40:33 +0000 (0:00:00.247) 0:00:22.512 ******** 2026-04-02 00:40:36.728249 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728259 | orchestrator | 2026-04-02 00:40:36.728264 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-02 00:40:36.728268 | orchestrator | Thursday 02 April 2026 00:40:33 +0000 
(0:00:00.095) 0:00:22.607 ******** 2026-04-02 00:40:36.728273 | orchestrator | ok: [testbed-node-4] => { 2026-04-02 00:40:36.728278 | orchestrator |  "ceph_osd_devices": { 2026-04-02 00:40:36.728283 | orchestrator |  "sdb": { 2026-04-02 00:40:36.728289 | orchestrator |  "osd_lvm_uuid": "88a5a1a0-9236-5c9d-8025-e39ec03fb505" 2026-04-02 00:40:36.728294 | orchestrator |  }, 2026-04-02 00:40:36.728299 | orchestrator |  "sdc": { 2026-04-02 00:40:36.728303 | orchestrator |  "osd_lvm_uuid": "b27c5b00-4597-5124-934a-fd641c3feb65" 2026-04-02 00:40:36.728308 | orchestrator |  } 2026-04-02 00:40:36.728313 | orchestrator |  } 2026-04-02 00:40:36.728318 | orchestrator | } 2026-04-02 00:40:36.728323 | orchestrator | 2026-04-02 00:40:36.728328 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-02 00:40:36.728333 | orchestrator | Thursday 02 April 2026 00:40:34 +0000 (0:00:00.186) 0:00:22.794 ******** 2026-04-02 00:40:36.728337 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728342 | orchestrator | 2026-04-02 00:40:36.728347 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-02 00:40:36.728352 | orchestrator | Thursday 02 April 2026 00:40:34 +0000 (0:00:00.157) 0:00:22.952 ******** 2026-04-02 00:40:36.728356 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728361 | orchestrator | 2026-04-02 00:40:36.728366 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-02 00:40:36.728371 | orchestrator | Thursday 02 April 2026 00:40:34 +0000 (0:00:00.111) 0:00:23.063 ******** 2026-04-02 00:40:36.728375 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:40:36.728380 | orchestrator | 2026-04-02 00:40:36.728385 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-02 00:40:36.728394 | orchestrator | Thursday 02 April 2026 00:40:34 +0000 
(0:00:00.108) 0:00:23.172 ******** 2026-04-02 00:40:36.728399 | orchestrator | changed: [testbed-node-4] => { 2026-04-02 00:40:36.728403 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-02 00:40:36.728408 | orchestrator |  "ceph_osd_devices": { 2026-04-02 00:40:36.728413 | orchestrator |  "sdb": { 2026-04-02 00:40:36.728418 | orchestrator |  "osd_lvm_uuid": "88a5a1a0-9236-5c9d-8025-e39ec03fb505" 2026-04-02 00:40:36.728423 | orchestrator |  }, 2026-04-02 00:40:36.728428 | orchestrator |  "sdc": { 2026-04-02 00:40:36.728433 | orchestrator |  "osd_lvm_uuid": "b27c5b00-4597-5124-934a-fd641c3feb65" 2026-04-02 00:40:36.728438 | orchestrator |  } 2026-04-02 00:40:36.728443 | orchestrator |  }, 2026-04-02 00:40:36.728447 | orchestrator |  "lvm_volumes": [ 2026-04-02 00:40:36.728452 | orchestrator |  { 2026-04-02 00:40:36.728457 | orchestrator |  "data": "osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505", 2026-04-02 00:40:36.728462 | orchestrator |  "data_vg": "ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505" 2026-04-02 00:40:36.728467 | orchestrator |  }, 2026-04-02 00:40:36.728471 | orchestrator |  { 2026-04-02 00:40:36.728476 | orchestrator |  "data": "osd-block-b27c5b00-4597-5124-934a-fd641c3feb65", 2026-04-02 00:40:36.728481 | orchestrator |  "data_vg": "ceph-b27c5b00-4597-5124-934a-fd641c3feb65" 2026-04-02 00:40:36.728486 | orchestrator |  } 2026-04-02 00:40:36.728491 | orchestrator |  ] 2026-04-02 00:40:36.728495 | orchestrator |  } 2026-04-02 00:40:36.728500 | orchestrator | } 2026-04-02 00:40:36.728505 | orchestrator | 2026-04-02 00:40:36.728510 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-02 00:40:36.728516 | orchestrator | Thursday 02 April 2026 00:40:34 +0000 (0:00:00.159) 0:00:23.331 ******** 2026-04-02 00:40:36.728524 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-02 00:40:36.728532 | orchestrator | 2026-04-02 00:40:36.728566 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-02 00:40:36.728574 | orchestrator | 2026-04-02 00:40:36.728579 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-02 00:40:36.728584 | orchestrator | Thursday 02 April 2026 00:40:35 +0000 (0:00:00.941) 0:00:24.273 ******** 2026-04-02 00:40:36.728589 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-02 00:40:36.728594 | orchestrator | 2026-04-02 00:40:36.728599 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-02 00:40:36.728603 | orchestrator | Thursday 02 April 2026 00:40:36 +0000 (0:00:00.440) 0:00:24.713 ******** 2026-04-02 00:40:36.728608 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:40:36.728613 | orchestrator | 2026-04-02 00:40:36.728618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:36.728622 | orchestrator | Thursday 02 April 2026 00:40:36 +0000 (0:00:00.454) 0:00:25.168 ******** 2026-04-02 00:40:36.728627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-02 00:40:36.728632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-02 00:40:36.728637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-02 00:40:36.728642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-02 00:40:36.728646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-02 00:40:36.728656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-02 00:40:43.988881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-02 00:40:43.988998 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-02 00:40:43.989013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-02 00:40:43.989025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-02 00:40:43.989037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-02 00:40:43.989047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-02 00:40:43.989059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-02 00:40:43.989071 | orchestrator | 2026-04-02 00:40:43.989084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989096 | orchestrator | Thursday 02 April 2026 00:40:36 +0000 (0:00:00.317) 0:00:25.486 ******** 2026-04-02 00:40:43.989107 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989120 | orchestrator | 2026-04-02 00:40:43.989132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989144 | orchestrator | Thursday 02 April 2026 00:40:36 +0000 (0:00:00.166) 0:00:25.652 ******** 2026-04-02 00:40:43.989155 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989166 | orchestrator | 2026-04-02 00:40:43.989176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989187 | orchestrator | Thursday 02 April 2026 00:40:37 +0000 (0:00:00.212) 0:00:25.864 ******** 2026-04-02 00:40:43.989198 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989210 | orchestrator | 2026-04-02 00:40:43.989221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989233 | 
orchestrator | Thursday 02 April 2026 00:40:37 +0000 (0:00:00.168) 0:00:26.032 ******** 2026-04-02 00:40:43.989244 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989254 | orchestrator | 2026-04-02 00:40:43.989261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989267 | orchestrator | Thursday 02 April 2026 00:40:37 +0000 (0:00:00.181) 0:00:26.214 ******** 2026-04-02 00:40:43.989294 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989301 | orchestrator | 2026-04-02 00:40:43.989308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989314 | orchestrator | Thursday 02 April 2026 00:40:37 +0000 (0:00:00.166) 0:00:26.381 ******** 2026-04-02 00:40:43.989321 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989327 | orchestrator | 2026-04-02 00:40:43.989334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989341 | orchestrator | Thursday 02 April 2026 00:40:37 +0000 (0:00:00.184) 0:00:26.566 ******** 2026-04-02 00:40:43.989347 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989354 | orchestrator | 2026-04-02 00:40:43.989361 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989367 | orchestrator | Thursday 02 April 2026 00:40:38 +0000 (0:00:00.190) 0:00:26.756 ******** 2026-04-02 00:40:43.989374 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989381 | orchestrator | 2026-04-02 00:40:43.989387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989394 | orchestrator | Thursday 02 April 2026 00:40:38 +0000 (0:00:00.186) 0:00:26.942 ******** 2026-04-02 00:40:43.989401 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439) 2026-04-02 00:40:43.989408 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439) 2026-04-02 00:40:43.989415 | orchestrator | 2026-04-02 00:40:43.989422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989428 | orchestrator | Thursday 02 April 2026 00:40:38 +0000 (0:00:00.501) 0:00:27.444 ******** 2026-04-02 00:40:43.989450 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a) 2026-04-02 00:40:43.989457 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a) 2026-04-02 00:40:43.989468 | orchestrator | 2026-04-02 00:40:43.989480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989491 | orchestrator | Thursday 02 April 2026 00:40:39 +0000 (0:00:00.700) 0:00:28.144 ******** 2026-04-02 00:40:43.989502 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9) 2026-04-02 00:40:43.989515 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9) 2026-04-02 00:40:43.989526 | orchestrator | 2026-04-02 00:40:43.989561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:40:43.989569 | orchestrator | Thursday 02 April 2026 00:40:39 +0000 (0:00:00.364) 0:00:28.509 ******** 2026-04-02 00:40:43.989575 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21) 2026-04-02 00:40:43.989582 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21) 2026-04-02 00:40:43.989589 | orchestrator | 2026-04-02 00:40:43.989595 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-02 00:40:43.989602 | orchestrator | Thursday 02 April 2026 00:40:40 +0000 (0:00:00.474) 0:00:28.983 ******** 2026-04-02 00:40:43.989608 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-02 00:40:43.989615 | orchestrator | 2026-04-02 00:40:43.989622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989642 | orchestrator | Thursday 02 April 2026 00:40:40 +0000 (0:00:00.317) 0:00:29.300 ******** 2026-04-02 00:40:43.989650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-02 00:40:43.989657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-02 00:40:43.989664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-02 00:40:43.989671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-02 00:40:43.989683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-02 00:40:43.989690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-02 00:40:43.989696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-02 00:40:43.989703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-02 00:40:43.989710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-02 00:40:43.989716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-02 00:40:43.989723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-02 00:40:43.989729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-02 00:40:43.989736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-02 00:40:43.989742 | orchestrator | 2026-04-02 00:40:43.989749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989756 | orchestrator | Thursday 02 April 2026 00:40:40 +0000 (0:00:00.346) 0:00:29.647 ******** 2026-04-02 00:40:43.989762 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989769 | orchestrator | 2026-04-02 00:40:43.989776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989782 | orchestrator | Thursday 02 April 2026 00:40:41 +0000 (0:00:00.186) 0:00:29.834 ******** 2026-04-02 00:40:43.989789 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989795 | orchestrator | 2026-04-02 00:40:43.989802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989809 | orchestrator | Thursday 02 April 2026 00:40:41 +0000 (0:00:00.184) 0:00:30.018 ******** 2026-04-02 00:40:43.989815 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989822 | orchestrator | 2026-04-02 00:40:43.989829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989835 | orchestrator | Thursday 02 April 2026 00:40:41 +0000 (0:00:00.186) 0:00:30.205 ******** 2026-04-02 00:40:43.989842 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989849 | orchestrator | 2026-04-02 00:40:43.989855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989862 | orchestrator | Thursday 02 April 2026 00:40:41 +0000 (0:00:00.170) 0:00:30.376 ******** 2026-04-02 00:40:43.989868 
| orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989875 | orchestrator | 2026-04-02 00:40:43.989882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989888 | orchestrator | Thursday 02 April 2026 00:40:41 +0000 (0:00:00.180) 0:00:30.556 ******** 2026-04-02 00:40:43.989895 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989902 | orchestrator | 2026-04-02 00:40:43.989908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989915 | orchestrator | Thursday 02 April 2026 00:40:42 +0000 (0:00:00.518) 0:00:31.075 ******** 2026-04-02 00:40:43.989921 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989928 | orchestrator | 2026-04-02 00:40:43.989935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989941 | orchestrator | Thursday 02 April 2026 00:40:42 +0000 (0:00:00.186) 0:00:31.261 ******** 2026-04-02 00:40:43.989948 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.989954 | orchestrator | 2026-04-02 00:40:43.989961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.989968 | orchestrator | Thursday 02 April 2026 00:40:42 +0000 (0:00:00.184) 0:00:31.446 ******** 2026-04-02 00:40:43.989974 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-02 00:40:43.989985 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-02 00:40:43.989992 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-02 00:40:43.989999 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-02 00:40:43.990006 | orchestrator | 2026-04-02 00:40:43.990012 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.990063 | orchestrator | Thursday 02 April 2026 00:40:43 +0000 (0:00:00.567) 
0:00:32.013 ******** 2026-04-02 00:40:43.990070 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.990077 | orchestrator | 2026-04-02 00:40:43.990084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.990091 | orchestrator | Thursday 02 April 2026 00:40:43 +0000 (0:00:00.180) 0:00:32.193 ******** 2026-04-02 00:40:43.990097 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.990104 | orchestrator | 2026-04-02 00:40:43.990110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.990117 | orchestrator | Thursday 02 April 2026 00:40:43 +0000 (0:00:00.171) 0:00:32.365 ******** 2026-04-02 00:40:43.990124 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.990130 | orchestrator | 2026-04-02 00:40:43.990137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:40:43.990144 | orchestrator | Thursday 02 April 2026 00:40:43 +0000 (0:00:00.163) 0:00:32.529 ******** 2026-04-02 00:40:43.990150 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:43.990157 | orchestrator | 2026-04-02 00:40:43.990168 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-02 00:40:47.747154 | orchestrator | Thursday 02 April 2026 00:40:43 +0000 (0:00:00.157) 0:00:32.686 ******** 2026-04-02 00:40:47.747237 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-02 00:40:47.747246 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-02 00:40:47.747253 | orchestrator | 2026-04-02 00:40:47.747259 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-02 00:40:47.747266 | orchestrator | Thursday 02 April 2026 00:40:44 +0000 (0:00:00.143) 0:00:32.830 ******** 2026-04-02 00:40:47.747272 | orchestrator | skipping: 
[testbed-node-5] 2026-04-02 00:40:47.747279 | orchestrator | 2026-04-02 00:40:47.747285 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-02 00:40:47.747291 | orchestrator | Thursday 02 April 2026 00:40:44 +0000 (0:00:00.138) 0:00:32.968 ******** 2026-04-02 00:40:47.747313 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747320 | orchestrator | 2026-04-02 00:40:47.747326 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-02 00:40:47.747332 | orchestrator | Thursday 02 April 2026 00:40:44 +0000 (0:00:00.150) 0:00:33.119 ******** 2026-04-02 00:40:47.747338 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747344 | orchestrator | 2026-04-02 00:40:47.747350 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-02 00:40:47.747356 | orchestrator | Thursday 02 April 2026 00:40:44 +0000 (0:00:00.139) 0:00:33.258 ******** 2026-04-02 00:40:47.747362 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:40:47.747369 | orchestrator | 2026-04-02 00:40:47.747375 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-02 00:40:47.747381 | orchestrator | Thursday 02 April 2026 00:40:44 +0000 (0:00:00.344) 0:00:33.603 ******** 2026-04-02 00:40:47.747387 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}}) 2026-04-02 00:40:47.747397 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc329f0f-76ef-5b6a-a482-1349b51ce957'}}) 2026-04-02 00:40:47.747403 | orchestrator | 2026-04-02 00:40:47.747409 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-02 00:40:47.747415 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 (0:00:00.145) 0:00:33.748 ******** 2026-04-02 00:40:47.747421 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}})  2026-04-02 00:40:47.747444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc329f0f-76ef-5b6a-a482-1349b51ce957'}})  2026-04-02 00:40:47.747451 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747457 | orchestrator | 2026-04-02 00:40:47.747463 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-02 00:40:47.747469 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 (0:00:00.129) 0:00:33.878 ******** 2026-04-02 00:40:47.747474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}})  2026-04-02 00:40:47.747480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc329f0f-76ef-5b6a-a482-1349b51ce957'}})  2026-04-02 00:40:47.747486 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747492 | orchestrator | 2026-04-02 00:40:47.747498 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-02 00:40:47.747504 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 (0:00:00.135) 0:00:34.013 ******** 2026-04-02 00:40:47.747509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}})  2026-04-02 00:40:47.747515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc329f0f-76ef-5b6a-a482-1349b51ce957'}})  2026-04-02 00:40:47.747521 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747527 | orchestrator | 2026-04-02 00:40:47.747582 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-02 00:40:47.747588 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 
(0:00:00.134) 0:00:34.147 ******** 2026-04-02 00:40:47.747594 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:40:47.747600 | orchestrator | 2026-04-02 00:40:47.747606 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-02 00:40:47.747612 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 (0:00:00.123) 0:00:34.271 ******** 2026-04-02 00:40:47.747618 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:40:47.747624 | orchestrator | 2026-04-02 00:40:47.747629 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-02 00:40:47.747635 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 (0:00:00.106) 0:00:34.377 ******** 2026-04-02 00:40:47.747641 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747647 | orchestrator | 2026-04-02 00:40:47.747653 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-02 00:40:47.747659 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 (0:00:00.105) 0:00:34.482 ******** 2026-04-02 00:40:47.747667 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747677 | orchestrator | 2026-04-02 00:40:47.747686 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-02 00:40:47.747696 | orchestrator | Thursday 02 April 2026 00:40:45 +0000 (0:00:00.111) 0:00:34.594 ******** 2026-04-02 00:40:47.747705 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747715 | orchestrator | 2026-04-02 00:40:47.747722 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-02 00:40:47.747729 | orchestrator | Thursday 02 April 2026 00:40:46 +0000 (0:00:00.131) 0:00:34.726 ******** 2026-04-02 00:40:47.747736 | orchestrator | ok: [testbed-node-5] => { 2026-04-02 00:40:47.747745 | orchestrator |  "ceph_osd_devices": { 2026-04-02 00:40:47.747757 | orchestrator |  "sdb": { 
2026-04-02 00:40:47.747780 | orchestrator |  "osd_lvm_uuid": "ce3dc94c-dd22-5089-bd64-d73b3d29d8ba" 2026-04-02 00:40:47.747787 | orchestrator |  }, 2026-04-02 00:40:47.747794 | orchestrator |  "sdc": { 2026-04-02 00:40:47.747801 | orchestrator |  "osd_lvm_uuid": "bc329f0f-76ef-5b6a-a482-1349b51ce957" 2026-04-02 00:40:47.747807 | orchestrator |  } 2026-04-02 00:40:47.747814 | orchestrator |  } 2026-04-02 00:40:47.747821 | orchestrator | } 2026-04-02 00:40:47.747828 | orchestrator | 2026-04-02 00:40:47.747840 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-02 00:40:47.747847 | orchestrator | Thursday 02 April 2026 00:40:46 +0000 (0:00:00.201) 0:00:34.928 ******** 2026-04-02 00:40:47.747856 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747867 | orchestrator | 2026-04-02 00:40:47.747878 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-02 00:40:47.747886 | orchestrator | Thursday 02 April 2026 00:40:46 +0000 (0:00:00.101) 0:00:35.029 ******** 2026-04-02 00:40:47.747893 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747900 | orchestrator | 2026-04-02 00:40:47.747907 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-02 00:40:47.747914 | orchestrator | Thursday 02 April 2026 00:40:46 +0000 (0:00:00.245) 0:00:35.274 ******** 2026-04-02 00:40:47.747921 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:40:47.747927 | orchestrator | 2026-04-02 00:40:47.747934 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-02 00:40:47.747941 | orchestrator | Thursday 02 April 2026 00:40:46 +0000 (0:00:00.139) 0:00:35.414 ******** 2026-04-02 00:40:47.747947 | orchestrator | changed: [testbed-node-5] => { 2026-04-02 00:40:47.747954 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-02 00:40:47.747961 | orchestrator | 
 "ceph_osd_devices": { 2026-04-02 00:40:47.747968 | orchestrator |  "sdb": { 2026-04-02 00:40:47.747975 | orchestrator |  "osd_lvm_uuid": "ce3dc94c-dd22-5089-bd64-d73b3d29d8ba" 2026-04-02 00:40:47.747982 | orchestrator |  }, 2026-04-02 00:40:47.747989 | orchestrator |  "sdc": { 2026-04-02 00:40:47.747996 | orchestrator |  "osd_lvm_uuid": "bc329f0f-76ef-5b6a-a482-1349b51ce957" 2026-04-02 00:40:47.748003 | orchestrator |  } 2026-04-02 00:40:47.748009 | orchestrator |  }, 2026-04-02 00:40:47.748016 | orchestrator |  "lvm_volumes": [ 2026-04-02 00:40:47.748023 | orchestrator |  { 2026-04-02 00:40:47.748029 | orchestrator |  "data": "osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba", 2026-04-02 00:40:47.748036 | orchestrator |  "data_vg": "ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba" 2026-04-02 00:40:47.748043 | orchestrator |  }, 2026-04-02 00:40:47.748053 | orchestrator |  { 2026-04-02 00:40:47.748060 | orchestrator |  "data": "osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957", 2026-04-02 00:40:47.748065 | orchestrator |  "data_vg": "ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957" 2026-04-02 00:40:47.748071 | orchestrator |  } 2026-04-02 00:40:47.748077 | orchestrator |  ] 2026-04-02 00:40:47.748083 | orchestrator |  } 2026-04-02 00:40:47.748089 | orchestrator | } 2026-04-02 00:40:47.748094 | orchestrator | 2026-04-02 00:40:47.748100 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-02 00:40:47.748106 | orchestrator | Thursday 02 April 2026 00:40:47 +0000 (0:00:00.294) 0:00:35.709 ******** 2026-04-02 00:40:47.748112 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-02 00:40:47.748117 | orchestrator | 2026-04-02 00:40:47.748123 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:40:47.748129 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-02 00:40:47.748176 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-02 00:40:47.748182 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-02 00:40:47.748188 | orchestrator | 2026-04-02 00:40:47.748194 | orchestrator | 2026-04-02 00:40:47.748200 | orchestrator | 2026-04-02 00:40:47.748205 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:40:47.748211 | orchestrator | Thursday 02 April 2026 00:40:47 +0000 (0:00:00.731) 0:00:36.441 ******** 2026-04-02 00:40:47.748222 | orchestrator | =============================================================================== 2026-04-02 00:40:47.748228 | orchestrator | Write configuration file ------------------------------------------------ 3.54s 2026-04-02 00:40:47.748234 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2026-04-02 00:40:47.748244 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s 2026-04-02 00:40:47.748251 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.92s 2026-04-02 00:40:47.748256 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-04-02 00:40:47.748262 | orchestrator | Get initial list of available block devices ----------------------------- 0.90s 2026-04-02 00:40:47.748268 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-04-02 00:40:47.748274 | orchestrator | Print configuration data ------------------------------------------------ 0.64s 2026-04-02 00:40:47.748280 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.59s 2026-04-02 00:40:47.748286 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.58s 2026-04-02 
00:40:47.748292 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-04-02 00:40:47.748297 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-04-02 00:40:47.748303 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2026-04-02 00:40:47.748314 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-04-02 00:40:47.956197 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s 2026-04-02 00:40:47.956302 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.52s 2026-04-02 00:40:47.956317 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.51s 2026-04-02 00:40:47.956329 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-04-02 00:40:47.956341 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-04-02 00:40:47.956351 | orchestrator | Print DB devices -------------------------------------------------------- 0.48s 2026-04-02 00:41:09.549629 | orchestrator | 2026-04-02 00:41:09 | INFO  | Task 0bae2d86-e56f-46c5-8c12-e9b245e6b959 (sync inventory) is running in background. Output coming soon. 
2026-04-02 00:41:36.657433 | orchestrator | 2026-04-02 00:41:11 | INFO  | Starting group_vars file reorganization
2026-04-02 00:41:36.657583 | orchestrator | 2026-04-02 00:41:11 | INFO  | Moved 0 file(s) to their respective directories
2026-04-02 00:41:36.657601 | orchestrator | 2026-04-02 00:41:11 | INFO  | Group_vars file reorganization completed
2026-04-02 00:41:36.657612 | orchestrator | 2026-04-02 00:41:13 | INFO  | Starting variable preparation from inventory
2026-04-02 00:41:36.657628 | orchestrator | 2026-04-02 00:41:16 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-02 00:41:36.657645 | orchestrator | 2026-04-02 00:41:16 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-02 00:41:36.657685 | orchestrator | 2026-04-02 00:41:16 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-02 00:41:36.657704 | orchestrator | 2026-04-02 00:41:16 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-02 00:41:36.657717 | orchestrator | 2026-04-02 00:41:16 | INFO  | Variable preparation completed
2026-04-02 00:41:36.657728 | orchestrator | 2026-04-02 00:41:17 | INFO  | Starting inventory overwrite handling
2026-04-02 00:41:36.657738 | orchestrator | 2026-04-02 00:41:17 | INFO  | Handling group overwrites in 99-overwrite
2026-04-02 00:41:36.657748 | orchestrator | 2026-04-02 00:41:17 | INFO  | Removing group frr:children from 60-generic
2026-04-02 00:41:36.657779 | orchestrator | 2026-04-02 00:41:17 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-02 00:41:36.657790 | orchestrator | 2026-04-02 00:41:17 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-02 00:41:36.657800 | orchestrator | 2026-04-02 00:41:17 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-02 00:41:36.657809 | orchestrator | 2026-04-02 00:41:17 | INFO  | Handling group overwrites in 20-roles
2026-04-02 00:41:36.657819 | orchestrator | 2026-04-02 00:41:17 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-02 00:41:36.657829 | orchestrator | 2026-04-02 00:41:17 | INFO  | Removed 5 group(s) in total
2026-04-02 00:41:36.657838 | orchestrator | 2026-04-02 00:41:17 | INFO  | Inventory overwrite handling completed
2026-04-02 00:41:36.657848 | orchestrator | 2026-04-02 00:41:18 | INFO  | Starting merge of inventory files
2026-04-02 00:41:36.657858 | orchestrator | 2026-04-02 00:41:18 | INFO  | Inventory files merged successfully
2026-04-02 00:41:36.657867 | orchestrator | 2026-04-02 00:41:22 | INFO  | Generating minified hosts file
2026-04-02 00:41:36.657877 | orchestrator | 2026-04-02 00:41:24 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-02 00:41:36.657888 | orchestrator | 2026-04-02 00:41:24 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-02 00:41:36.657897 | orchestrator | 2026-04-02 00:41:25 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-02 00:41:36.657907 | orchestrator | 2026-04-02 00:41:35 | INFO  | Successfully wrote ClusterShell configuration
2026-04-02 00:41:36.657917 | orchestrator | [master 0423f65] 2026-04-02-00-41
2026-04-02 00:41:36.657928 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-02 00:41:36.657938 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-02 00:41:36.657948 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-02 00:41:36.657958 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-02 00:41:38.058281 | orchestrator | 2026-04-02 00:41:38 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-02 00:41:38.118842 | orchestrator | 2026-04-02 00:41:38 | INFO  | Task 3e56ad56-9607-489d-a335-0c1f4d1666a9 (ceph-create-lvm-devices) was prepared for execution.
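As an aside on the "inventory overwrite handling" log entries above: a higher-priority layer (here `99-overwrite` and `20-roles`) causes matching group definitions to be removed from lower-priority inventory files. The sketch below is purely illustrative of that behavior and is not the actual OSISM inventory reconciler; the function name and data shapes are assumptions.

```python
# Illustrative sketch only, not the OSISM implementation.
# layers: {inventory file name: {group name: group body}}
# overwritten: group names redefined by a higher-priority overwrite layer.
def remove_overwritten_groups(layers, overwritten):
    removed = 0
    for layer_name, groups in layers.items():
        for group in [g for g in groups if g in overwritten]:
            del groups[group]  # e.g. "Removing group frr:children from 60-generic"
            removed += 1
    return removed

layers = {
    "60-generic": {"frr:children": []},
    "50-infrastructure": {"netbird:children": [], "k3s_node": []},
    "50-ceph": {"ceph-mds": [], "ceph-rgw": []},
}
overwritten = {"frr:children", "netbird:children", "ceph-mds", "ceph-rgw", "k3s_node"}
print(remove_overwritten_groups(layers, overwritten))  # 5, matching "Removed 5 group(s) in total"
```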
2026-04-02 00:41:38.118963 | orchestrator | 2026-04-02 00:41:38 | INFO  | It takes a moment until task 3e56ad56-9607-489d-a335-0c1f4d1666a9 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-02 00:41:49.334624 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-02 00:41:49.334785 | orchestrator | 2.16.14
2026-04-02 00:41:49.334817 | orchestrator |
2026-04-02 00:41:49.334838 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-02 00:41:49.334860 | orchestrator |
2026-04-02 00:41:49.334881 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-02 00:41:49.334899 | orchestrator | Thursday 02 April 2026 00:41:42 +0000 (0:00:00.246) 0:00:00.246 ********
2026-04-02 00:41:49.334910 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-02 00:41:49.334921 | orchestrator |
2026-04-02 00:41:49.334932 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-02 00:41:49.334943 | orchestrator | Thursday 02 April 2026 00:41:42 +0000 (0:00:00.216) 0:00:00.463 ********
2026-04-02 00:41:49.334955 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:41:49.334965 | orchestrator |
2026-04-02 00:41:49.334976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.334987 | orchestrator | Thursday 02 April 2026 00:41:42 +0000 (0:00:00.198) 0:00:00.662 ********
2026-04-02 00:41:49.335023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-02 00:41:49.335035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-02 00:41:49.335046 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-02 00:41:49.335057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-02 00:41:49.335067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-02 00:41:49.335079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-02 00:41:49.335090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-02 00:41:49.335101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-02 00:41:49.335112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-02 00:41:49.335123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-02 00:41:49.335134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-02 00:41:49.335144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-02 00:41:49.335155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-02 00:41:49.335166 | orchestrator |
2026-04-02 00:41:49.335176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335187 | orchestrator | Thursday 02 April 2026 00:41:43 +0000 (0:00:00.389) 0:00:01.051 ********
2026-04-02 00:41:49.335198 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335209 | orchestrator |
2026-04-02 00:41:49.335220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335231 | orchestrator | Thursday 02 April 2026 00:41:43 +0000 (0:00:00.357) 0:00:01.409 ********
2026-04-02 00:41:49.335241 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335252 | orchestrator |
2026-04-02 00:41:49.335263 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335273 | orchestrator | Thursday 02 April 2026 00:41:43 +0000 (0:00:00.177) 0:00:01.586 ********
2026-04-02 00:41:49.335302 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335314 | orchestrator |
2026-04-02 00:41:49.335324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335335 | orchestrator | Thursday 02 April 2026 00:41:44 +0000 (0:00:00.173) 0:00:01.760 ********
2026-04-02 00:41:49.335346 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335356 | orchestrator |
2026-04-02 00:41:49.335367 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335378 | orchestrator | Thursday 02 April 2026 00:41:44 +0000 (0:00:00.174) 0:00:01.935 ********
2026-04-02 00:41:49.335388 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335399 | orchestrator |
2026-04-02 00:41:49.335410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335421 | orchestrator | Thursday 02 April 2026 00:41:44 +0000 (0:00:00.176) 0:00:02.111 ********
2026-04-02 00:41:49.335431 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335442 | orchestrator |
2026-04-02 00:41:49.335453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335464 | orchestrator | Thursday 02 April 2026 00:41:44 +0000 (0:00:00.163) 0:00:02.274 ********
2026-04-02 00:41:49.335497 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335508 | orchestrator |
2026-04-02 00:41:49.335519 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335529 | orchestrator | Thursday 02 April 2026 00:41:44 +0000 (0:00:00.154) 0:00:02.428 ********
2026-04-02 00:41:49.335540 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.335559 | orchestrator |
2026-04-02 00:41:49.335570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335580 | orchestrator | Thursday 02 April 2026 00:41:44 +0000 (0:00:00.174) 0:00:02.602 ********
2026-04-02 00:41:49.335591 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d)
2026-04-02 00:41:49.335603 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d)
2026-04-02 00:41:49.335614 | orchestrator |
2026-04-02 00:41:49.335624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335655 | orchestrator | Thursday 02 April 2026 00:41:45 +0000 (0:00:00.407) 0:00:03.010 ********
2026-04-02 00:41:49.335667 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb)
2026-04-02 00:41:49.335678 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb)
2026-04-02 00:41:49.335688 | orchestrator |
2026-04-02 00:41:49.335699 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335710 | orchestrator | Thursday 02 April 2026 00:41:45 +0000 (0:00:00.376) 0:00:03.386 ********
2026-04-02 00:41:49.335721 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45)
2026-04-02 00:41:49.335731 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45)
2026-04-02 00:41:49.335742 | orchestrator |
2026-04-02 00:41:49.335753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335763 | orchestrator | Thursday 02 April 2026 00:41:46 +0000 (0:00:00.506) 0:00:03.893 ********
2026-04-02 00:41:49.335774 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161)
2026-04-02 00:41:49.335785 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161)
2026-04-02 00:41:49.335795 | orchestrator |
2026-04-02 00:41:49.335806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:41:49.335817 | orchestrator | Thursday 02 April 2026 00:41:46 +0000 (0:00:00.637) 0:00:04.530 ********
2026-04-02 00:41:49.335827 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-02 00:41:49.335838 | orchestrator |
2026-04-02 00:41:49.335848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.335865 | orchestrator | Thursday 02 April 2026 00:41:47 +0000 (0:00:00.690) 0:00:05.221 ********
2026-04-02 00:41:49.335876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-02 00:41:49.335887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-02 00:41:49.335898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-02 00:41:49.335909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-02 00:41:49.335919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-02 00:41:49.335930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-02 00:41:49.335941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-02 00:41:49.335951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-02 00:41:49.335962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-02 00:41:49.335972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-02 00:41:49.335983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-02 00:41:49.335993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-02 00:41:49.336011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-02 00:41:49.336022 | orchestrator |
2026-04-02 00:41:49.336033 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.336044 | orchestrator | Thursday 02 April 2026 00:41:47 +0000 (0:00:00.430) 0:00:05.651 ********
2026-04-02 00:41:49.336054 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.336065 | orchestrator |
2026-04-02 00:41:49.336076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.336086 | orchestrator | Thursday 02 April 2026 00:41:48 +0000 (0:00:00.183) 0:00:05.835 ********
2026-04-02 00:41:49.336097 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.336130 | orchestrator |
2026-04-02 00:41:49.336141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.336152 | orchestrator | Thursday 02 April 2026 00:41:48 +0000 (0:00:00.202) 0:00:06.038 ********
2026-04-02 00:41:49.336162 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.336173 | orchestrator |
2026-04-02 00:41:49.336183 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.336194 | orchestrator | Thursday 02 April 2026 00:41:48 +0000 (0:00:00.214) 0:00:06.252 ********
2026-04-02 00:41:49.336204 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.336215 | orchestrator |
2026-04-02 00:41:49.336226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.336236 | orchestrator | Thursday 02 April 2026 00:41:48 +0000 (0:00:00.192) 0:00:06.445 ********
2026-04-02 00:41:49.336247 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.336258 | orchestrator |
2026-04-02 00:41:49.336268 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.336279 | orchestrator | Thursday 02 April 2026 00:41:48 +0000 (0:00:00.188) 0:00:06.633 ********
2026-04-02 00:41:49.336290 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.336300 | orchestrator |
2026-04-02 00:41:49.336311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:49.336321 | orchestrator | Thursday 02 April 2026 00:41:49 +0000 (0:00:00.194) 0:00:06.827 ********
2026-04-02 00:41:49.336332 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:49.336343 | orchestrator |
2026-04-02 00:41:49.336359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:57.257592 | orchestrator | Thursday 02 April 2026 00:41:49 +0000 (0:00:00.181) 0:00:07.009 ********
2026-04-02 00:41:57.257694 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.257706 | orchestrator |
2026-04-02 00:41:57.257715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:57.257720 | orchestrator | Thursday 02 April 2026 00:41:49 +0000 (0:00:00.194) 0:00:07.203 ********
2026-04-02 00:41:57.257724 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-02 00:41:57.257729 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-02 00:41:57.257734 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-02 00:41:57.257738 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-02 00:41:57.257743 | orchestrator |
2026-04-02 00:41:57.257747 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:57.257751 | orchestrator | Thursday 02 April 2026 00:41:50 +0000 (0:00:01.102) 0:00:08.306 ********
2026-04-02 00:41:57.257755 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.257759 | orchestrator |
2026-04-02 00:41:57.257763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:57.257767 | orchestrator | Thursday 02 April 2026 00:41:50 +0000 (0:00:00.184) 0:00:08.491 ********
2026-04-02 00:41:57.257771 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.257778 | orchestrator |
2026-04-02 00:41:57.257784 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:57.257812 | orchestrator | Thursday 02 April 2026 00:41:51 +0000 (0:00:00.197) 0:00:08.689 ********
2026-04-02 00:41:57.257818 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.257825 | orchestrator |
2026-04-02 00:41:57.257831 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:41:57.257837 | orchestrator | Thursday 02 April 2026 00:41:51 +0000 (0:00:00.206) 0:00:08.896 ********
2026-04-02 00:41:57.257843 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.257849 | orchestrator |
2026-04-02 00:41:57.257856 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-02 00:41:57.257862 | orchestrator | Thursday 02 April 2026 00:41:51 +0000 (0:00:00.187) 0:00:09.083 ********
2026-04-02 00:41:57.257869 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.257876 | orchestrator |
2026-04-02 00:41:57.257882 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-02 00:41:57.257888 | orchestrator | Thursday 02 April 2026 00:41:51 +0000 (0:00:00.131) 0:00:09.214 ********
2026-04-02 00:41:57.257893 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'}})
2026-04-02 00:41:57.257897 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c3a3e1f2-53da-5696-b7a3-d36d02964763'}})
2026-04-02 00:41:57.257901 | orchestrator |
2026-04-02 00:41:57.257905 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-02 00:41:57.257909 | orchestrator | Thursday 02 April 2026 00:41:51 +0000 (0:00:00.184) 0:00:09.398 ********
2026-04-02 00:41:57.257915 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.257920 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.257924 | orchestrator |
2026-04-02 00:41:57.257928 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-02 00:41:57.257932 | orchestrator | Thursday 02 April 2026 00:41:53 +0000 (0:00:02.094) 0:00:11.493 ********
2026-04-02 00:41:57.257936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.257961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.257968 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.257974 | orchestrator |
2026-04-02 00:41:57.257981 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-02 00:41:57.257987 | orchestrator | Thursday 02 April 2026 00:41:53 +0000 (0:00:00.162) 0:00:11.656 ********
2026-04-02 00:41:57.257994 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258000 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258007 | orchestrator |
2026-04-02 00:41:57.258013 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-02 00:41:57.258066 | orchestrator | Thursday 02 April 2026 00:41:55 +0000 (0:00:01.493) 0:00:13.149 ********
2026-04-02 00:41:57.258072 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258084 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258090 | orchestrator |
2026-04-02 00:41:57.258097 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-02 00:41:57.258112 | orchestrator | Thursday 02 April 2026 00:41:55 +0000 (0:00:00.139) 0:00:13.289 ********
2026-04-02 00:41:57.258136 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258143 | orchestrator |
2026-04-02 00:41:57.258149 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-02 00:41:57.258155 | orchestrator | Thursday 02 April 2026 00:41:55 +0000 (0:00:00.123) 0:00:13.413 ********
2026-04-02 00:41:57.258161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258173 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258179 | orchestrator |
2026-04-02 00:41:57.258185 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-02 00:41:57.258190 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.297) 0:00:13.711 ********
2026-04-02 00:41:57.258196 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258202 | orchestrator |
2026-04-02 00:41:57.258208 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-02 00:41:57.258214 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.127) 0:00:13.838 ********
2026-04-02 00:41:57.258221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258234 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258239 | orchestrator |
2026-04-02 00:41:57.258251 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-02 00:41:57.258257 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.142) 0:00:13.981 ********
2026-04-02 00:41:57.258263 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258269 | orchestrator |
2026-04-02 00:41:57.258275 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-02 00:41:57.258281 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.115) 0:00:14.096 ********
2026-04-02 00:41:57.258287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258299 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258306 | orchestrator |
2026-04-02 00:41:57.258312 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-02 00:41:57.258318 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.150) 0:00:14.247 ********
2026-04-02 00:41:57.258325 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:41:57.258333 | orchestrator |
2026-04-02 00:41:57.258355 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-02 00:41:57.258370 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.107) 0:00:14.354 ********
2026-04-02 00:41:57.258374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258382 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258386 | orchestrator |
2026-04-02 00:41:57.258390 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-02 00:41:57.258400 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.131) 0:00:14.485 ********
2026-04-02 00:41:57.258404 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258412 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258416 | orchestrator |
2026-04-02 00:41:57.258420 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-02 00:41:57.258424 | orchestrator | Thursday 02 April 2026 00:41:56 +0000 (0:00:00.157) 0:00:14.643 ********
2026-04-02 00:41:57.258427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
2026-04-02 00:41:57.258431 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
2026-04-02 00:41:57.258435 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258439 | orchestrator |
2026-04-02 00:41:57.258443 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-02 00:41:57.258447 | orchestrator | Thursday 02 April 2026 00:41:57 +0000 (0:00:00.156) 0:00:14.799 ********
2026-04-02 00:41:57.258451 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:41:57.258455 | orchestrator |
2026-04-02 00:41:57.258474 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-02 00:41:57.258484 | orchestrator | Thursday 02 April 2026 00:41:57 +0000 (0:00:00.134) 0:00:14.934 ********
2026-04-02 00:42:02.892517 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:42:02.892599 | orchestrator |
2026-04-02 00:42:02.892607 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-02 00:42:02.892614 | orchestrator | Thursday 02 April 2026 00:41:57 +0000 (0:00:00.128) 0:00:15.062 ********
2026-04-02 00:42:02.892619 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:42:02.892623 | orchestrator |
2026-04-02 00:42:02.892629 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-02 00:42:02.892634 | orchestrator | Thursday 02 April 2026 00:41:57 +0000 (0:00:00.133) 0:00:15.195 ********
2026-04-02 00:42:02.892639 | orchestrator | ok: [testbed-node-3] => {
2026-04-02 00:42:02.892645 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-02 00:42:02.892650 | orchestrator | }
2026-04-02 00:42:02.892655 | orchestrator |
2026-04-02 00:42:02.892660 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-02 00:42:02.892665 | orchestrator | Thursday 02 April 2026 00:41:57 +0000 (0:00:00.336) 0:00:15.532 ********
2026-04-02 00:42:02.892669 | orchestrator | ok: [testbed-node-3] => {
2026-04-02 00:42:02.892674 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-02 00:42:02.892678 | orchestrator | }
2026-04-02 00:42:02.892683 | orchestrator |
2026-04-02 00:42:02.892688 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-02 00:42:02.892692 | orchestrator | Thursday 02 April 2026 00:41:57 +0000 (0:00:00.132) 0:00:15.668 ********
2026-04-02 00:42:02.892697 | orchestrator | ok: [testbed-node-3] => {
2026-04-02 00:42:02.892702 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-02 00:42:02.892707 | orchestrator | }
2026-04-02 00:42:02.892711 | orchestrator |
2026-04-02 00:42:02.892716 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-02 00:42:02.892721 | orchestrator | Thursday 02 April 2026 00:41:58 +0000 (0:00:00.132) 0:00:15.801 ********
2026-04-02 00:42:02.892725 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:42:02.892730 | orchestrator |
2026-04-02 00:42:02.892735 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-02 00:42:02.892740 | orchestrator | Thursday 02 April 2026 00:41:58 +0000 (0:00:00.657) 0:00:16.458 ********
2026-04-02 00:42:02.892765 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:42:02.892773 | orchestrator |
2026-04-02 00:42:02.892781 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-02 00:42:02.892790 | orchestrator | Thursday 02 April 2026 00:41:59 +0000 (0:00:00.490) 0:00:16.949 ********
2026-04-02 00:42:02.892798 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:42:02.892807 | orchestrator |
2026-04-02 00:42:02.892816 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-02 00:42:02.892824 | orchestrator | Thursday 02 April 2026 00:41:59 +0000 (0:00:00.506) 0:00:17.456 ********
2026-04-02 00:42:02.892830 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:42:02.892835 | orchestrator |
2026-04-02 00:42:02.892840 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-02 00:42:02.892844 | orchestrator | Thursday 02 April 2026 00:41:59 +0000 (0:00:00.134) 0:00:17.591 ********
2026-04-02 00:42:02.892849 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:42:02.892854 | orchestrator |
2026-04-02 00:42:02.892858 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-02 00:42:02.892863 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.090) 0:00:17.681 ********
2026-04-02 00:42:02.892867 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:42:02.892872 | orchestrator | 2026-04-02 00:42:02.892880 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-02 00:42:02.892887 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.081) 0:00:17.763 ******** 2026-04-02 00:42:02.892894 | orchestrator | ok: [testbed-node-3] => { 2026-04-02 00:42:02.892902 | orchestrator |  "vgs_report": { 2026-04-02 00:42:02.892909 | orchestrator |  "vg": [] 2026-04-02 00:42:02.892916 | orchestrator |  } 2026-04-02 00:42:02.892923 | orchestrator | } 2026-04-02 00:42:02.892931 | orchestrator | 2026-04-02 00:42:02.892939 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-02 00:42:02.892945 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.109) 0:00:17.872 ******** 2026-04-02 00:42:02.892952 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.892959 | orchestrator | 2026-04-02 00:42:02.892965 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-02 00:42:02.892974 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.103) 0:00:17.975 ******** 2026-04-02 00:42:02.892981 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.892989 | orchestrator | 2026-04-02 00:42:02.892996 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-02 00:42:02.893004 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.095) 0:00:18.071 ******** 2026-04-02 00:42:02.893012 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893019 | orchestrator | 2026-04-02 00:42:02.893027 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-02 00:42:02.893035 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.234) 0:00:18.305 ******** 2026-04-02 00:42:02.893042 | orchestrator | skipping: [testbed-node-3] 
2026-04-02 00:42:02.893049 | orchestrator | 2026-04-02 00:42:02.893056 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-02 00:42:02.893064 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.109) 0:00:18.414 ******** 2026-04-02 00:42:02.893071 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893077 | orchestrator | 2026-04-02 00:42:02.893083 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-02 00:42:02.893089 | orchestrator | Thursday 02 April 2026 00:42:00 +0000 (0:00:00.147) 0:00:18.561 ******** 2026-04-02 00:42:02.893096 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893102 | orchestrator | 2026-04-02 00:42:02.893108 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-02 00:42:02.893115 | orchestrator | Thursday 02 April 2026 00:42:01 +0000 (0:00:00.136) 0:00:18.698 ******** 2026-04-02 00:42:02.893121 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893141 | orchestrator | 2026-04-02 00:42:02.893151 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-02 00:42:02.893158 | orchestrator | Thursday 02 April 2026 00:42:01 +0000 (0:00:00.116) 0:00:18.814 ******** 2026-04-02 00:42:02.893181 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893188 | orchestrator | 2026-04-02 00:42:02.893210 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-02 00:42:02.893218 | orchestrator | Thursday 02 April 2026 00:42:01 +0000 (0:00:00.135) 0:00:18.950 ******** 2026-04-02 00:42:02.893225 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893231 | orchestrator | 2026-04-02 00:42:02.893238 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-02 00:42:02.893244 | orchestrator | 
Thursday 02 April 2026 00:42:01 +0000 (0:00:00.107) 0:00:19.057 ******** 2026-04-02 00:42:02.893251 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893431 | orchestrator | 2026-04-02 00:42:02.893442 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-02 00:42:02.893450 | orchestrator | Thursday 02 April 2026 00:42:01 +0000 (0:00:00.150) 0:00:19.207 ******** 2026-04-02 00:42:02.893508 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893516 | orchestrator | 2026-04-02 00:42:02.893523 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-02 00:42:02.893530 | orchestrator | Thursday 02 April 2026 00:42:01 +0000 (0:00:00.120) 0:00:19.328 ******** 2026-04-02 00:42:02.893537 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893545 | orchestrator | 2026-04-02 00:42:02.893551 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-02 00:42:02.893559 | orchestrator | Thursday 02 April 2026 00:42:01 +0000 (0:00:00.127) 0:00:19.455 ******** 2026-04-02 00:42:02.893566 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893574 | orchestrator | 2026-04-02 00:42:02.893581 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-02 00:42:02.893589 | orchestrator | Thursday 02 April 2026 00:42:01 +0000 (0:00:00.132) 0:00:19.588 ******** 2026-04-02 00:42:02.893597 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893604 | orchestrator | 2026-04-02 00:42:02.893617 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-02 00:42:02.893622 | orchestrator | Thursday 02 April 2026 00:42:02 +0000 (0:00:00.112) 0:00:19.700 ******** 2026-04-02 00:42:02.893628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 
'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:02.893635 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:02.893639 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893643 | orchestrator | 2026-04-02 00:42:02.893647 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-02 00:42:02.893651 | orchestrator | Thursday 02 April 2026 00:42:02 +0000 (0:00:00.145) 0:00:19.845 ******** 2026-04-02 00:42:02.893656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:02.893660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:02.893664 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893668 | orchestrator | 2026-04-02 00:42:02.893672 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-02 00:42:02.893676 | orchestrator | Thursday 02 April 2026 00:42:02 +0000 (0:00:00.263) 0:00:20.108 ******** 2026-04-02 00:42:02.893681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:02.893685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:02.893697 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893701 | orchestrator | 2026-04-02 00:42:02.893705 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-02 00:42:02.893710 | orchestrator | Thursday 02 April 2026 00:42:02 +0000 (0:00:00.150) 0:00:20.259 ******** 2026-04-02 00:42:02.893714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:02.893718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:02.893722 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893726 | orchestrator | 2026-04-02 00:42:02.893730 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-02 00:42:02.893734 | orchestrator | Thursday 02 April 2026 00:42:02 +0000 (0:00:00.135) 0:00:20.394 ******** 2026-04-02 00:42:02.893739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:02.893743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:02.893747 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:02.893751 | orchestrator | 2026-04-02 00:42:02.893755 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-02 00:42:02.893759 | orchestrator | Thursday 02 April 2026 00:42:02 +0000 (0:00:00.115) 0:00:20.510 ******** 2026-04-02 00:42:02.893773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:07.845006 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 
'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:07.845112 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:07.845129 | orchestrator | 2026-04-02 00:42:07.845142 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-02 00:42:07.845154 | orchestrator | Thursday 02 April 2026 00:42:02 +0000 (0:00:00.144) 0:00:20.654 ******** 2026-04-02 00:42:07.845166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:07.845177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:07.845188 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:07.845199 | orchestrator | 2026-04-02 00:42:07.845210 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-02 00:42:07.845221 | orchestrator | Thursday 02 April 2026 00:42:03 +0000 (0:00:00.146) 0:00:20.801 ******** 2026-04-02 00:42:07.845232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:07.845260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:07.845272 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:07.845283 | orchestrator | 2026-04-02 00:42:07.845310 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-02 00:42:07.845321 | orchestrator | Thursday 02 April 2026 00:42:03 +0000 (0:00:00.138) 0:00:20.939 ******** 2026-04-02 00:42:07.845332 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:42:07.845345 | 
orchestrator | 2026-04-02 00:42:07.845378 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-02 00:42:07.845389 | orchestrator | Thursday 02 April 2026 00:42:03 +0000 (0:00:00.540) 0:00:21.479 ******** 2026-04-02 00:42:07.845400 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:42:07.845411 | orchestrator | 2026-04-02 00:42:07.845422 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-02 00:42:07.845433 | orchestrator | Thursday 02 April 2026 00:42:04 +0000 (0:00:00.506) 0:00:21.986 ******** 2026-04-02 00:42:07.845443 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:42:07.845522 | orchestrator | 2026-04-02 00:42:07.845534 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-02 00:42:07.845548 | orchestrator | Thursday 02 April 2026 00:42:04 +0000 (0:00:00.132) 0:00:22.119 ******** 2026-04-02 00:42:07.845560 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'vg_name': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'}) 2026-04-02 00:42:07.845575 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'vg_name': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'}) 2026-04-02 00:42:07.845587 | orchestrator | 2026-04-02 00:42:07.845601 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-02 00:42:07.845614 | orchestrator | Thursday 02 April 2026 00:42:04 +0000 (0:00:00.172) 0:00:22.291 ******** 2026-04-02 00:42:07.845627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:07.845641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 
'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:07.845654 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:07.845667 | orchestrator | 2026-04-02 00:42:07.845680 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-02 00:42:07.845693 | orchestrator | Thursday 02 April 2026 00:42:04 +0000 (0:00:00.130) 0:00:22.422 ******** 2026-04-02 00:42:07.845706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:07.845719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:07.845733 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:07.845744 | orchestrator | 2026-04-02 00:42:07.845755 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-02 00:42:07.845765 | orchestrator | Thursday 02 April 2026 00:42:05 +0000 (0:00:00.265) 0:00:22.688 ******** 2026-04-02 00:42:07.845776 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})  2026-04-02 00:42:07.845788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})  2026-04-02 00:42:07.845799 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:42:07.845809 | orchestrator | 2026-04-02 00:42:07.845820 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-02 00:42:07.845831 | orchestrator | Thursday 02 April 2026 00:42:05 +0000 (0:00:00.126) 0:00:22.814 ******** 2026-04-02 00:42:07.845860 | orchestrator | ok: [testbed-node-3] => { 2026-04-02 
00:42:07.845872 | orchestrator |  "lvm_report": { 2026-04-02 00:42:07.845883 | orchestrator |  "lv": [ 2026-04-02 00:42:07.845894 | orchestrator |  { 2026-04-02 00:42:07.845905 | orchestrator |  "lv_name": "osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb", 2026-04-02 00:42:07.845917 | orchestrator |  "vg_name": "ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb" 2026-04-02 00:42:07.845927 | orchestrator |  }, 2026-04-02 00:42:07.845947 | orchestrator |  { 2026-04-02 00:42:07.845958 | orchestrator |  "lv_name": "osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763", 2026-04-02 00:42:07.845969 | orchestrator |  "vg_name": "ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763" 2026-04-02 00:42:07.845980 | orchestrator |  } 2026-04-02 00:42:07.845990 | orchestrator |  ], 2026-04-02 00:42:07.846001 | orchestrator |  "pv": [ 2026-04-02 00:42:07.846012 | orchestrator |  { 2026-04-02 00:42:07.846101 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-02 00:42:07.846113 | orchestrator |  "vg_name": "ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb" 2026-04-02 00:42:07.846124 | orchestrator |  }, 2026-04-02 00:42:07.846135 | orchestrator |  { 2026-04-02 00:42:07.846145 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-02 00:42:07.846157 | orchestrator |  "vg_name": "ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763" 2026-04-02 00:42:07.846203 | orchestrator |  } 2026-04-02 00:42:07.846215 | orchestrator |  ] 2026-04-02 00:42:07.846226 | orchestrator |  } 2026-04-02 00:42:07.846237 | orchestrator | } 2026-04-02 00:42:07.846248 | orchestrator | 2026-04-02 00:42:07.846259 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-02 00:42:07.846270 | orchestrator | 2026-04-02 00:42:07.846281 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-02 00:42:07.846293 | orchestrator | Thursday 02 April 2026 00:42:05 +0000 (0:00:00.269) 0:00:23.083 ******** 2026-04-02 00:42:07.846305 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-02 00:42:07.846316 | orchestrator | 2026-04-02 00:42:07.846327 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-02 00:42:07.846338 | orchestrator | Thursday 02 April 2026 00:42:05 +0000 (0:00:00.246) 0:00:23.329 ******** 2026-04-02 00:42:07.846349 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:42:07.846360 | orchestrator | 2026-04-02 00:42:07.846371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:07.846382 | orchestrator | Thursday 02 April 2026 00:42:05 +0000 (0:00:00.233) 0:00:23.563 ******** 2026-04-02 00:42:07.846400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-02 00:42:07.846420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-02 00:42:07.846438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-02 00:42:07.846482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-02 00:42:07.846500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-02 00:42:07.846516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-02 00:42:07.846532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-02 00:42:07.846549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-02 00:42:07.846567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-02 00:42:07.846610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-02 00:42:07.846628 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-02 00:42:07.846647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-02 00:42:07.846665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-02 00:42:07.846682 | orchestrator | 2026-04-02 00:42:07.846701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:07.846719 | orchestrator | Thursday 02 April 2026 00:42:06 +0000 (0:00:00.398) 0:00:23.961 ******** 2026-04-02 00:42:07.846738 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:07.846770 | orchestrator | 2026-04-02 00:42:07.846787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:07.846805 | orchestrator | Thursday 02 April 2026 00:42:06 +0000 (0:00:00.202) 0:00:24.163 ******** 2026-04-02 00:42:07.846823 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:07.846840 | orchestrator | 2026-04-02 00:42:07.846857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:07.846875 | orchestrator | Thursday 02 April 2026 00:42:06 +0000 (0:00:00.198) 0:00:24.362 ******** 2026-04-02 00:42:07.846892 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:07.846906 | orchestrator | 2026-04-02 00:42:07.846922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:07.846938 | orchestrator | Thursday 02 April 2026 00:42:06 +0000 (0:00:00.205) 0:00:24.568 ******** 2026-04-02 00:42:07.846954 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:07.846971 | orchestrator | 2026-04-02 00:42:07.846989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:07.847006 | orchestrator | Thursday 02 April 2026 00:42:07 +0000 
(0:00:00.576) 0:00:25.145 ******** 2026-04-02 00:42:07.847024 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:07.847039 | orchestrator | 2026-04-02 00:42:07.847056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:07.847073 | orchestrator | Thursday 02 April 2026 00:42:07 +0000 (0:00:00.196) 0:00:25.342 ******** 2026-04-02 00:42:07.847092 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:07.847110 | orchestrator | 2026-04-02 00:42:07.847146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:17.470155 | orchestrator | Thursday 02 April 2026 00:42:07 +0000 (0:00:00.179) 0:00:25.521 ******** 2026-04-02 00:42:17.470250 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.470267 | orchestrator | 2026-04-02 00:42:17.470281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:17.470294 | orchestrator | Thursday 02 April 2026 00:42:08 +0000 (0:00:00.177) 0:00:25.698 ******** 2026-04-02 00:42:17.470306 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.470318 | orchestrator | 2026-04-02 00:42:17.470329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:17.470342 | orchestrator | Thursday 02 April 2026 00:42:08 +0000 (0:00:00.187) 0:00:25.886 ******** 2026-04-02 00:42:17.470355 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d) 2026-04-02 00:42:17.470369 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d) 2026-04-02 00:42:17.470382 | orchestrator | 2026-04-02 00:42:17.470395 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:17.470408 | orchestrator | Thursday 02 April 2026 00:42:08 +0000 
(0:00:00.378) 0:00:26.264 ******** 2026-04-02 00:42:17.470420 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4) 2026-04-02 00:42:17.470433 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4) 2026-04-02 00:42:17.470518 | orchestrator | 2026-04-02 00:42:17.470554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:17.470565 | orchestrator | Thursday 02 April 2026 00:42:08 +0000 (0:00:00.382) 0:00:26.647 ******** 2026-04-02 00:42:17.470577 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf) 2026-04-02 00:42:17.470590 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf) 2026-04-02 00:42:17.470602 | orchestrator | 2026-04-02 00:42:17.470615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:17.470627 | orchestrator | Thursday 02 April 2026 00:42:09 +0000 (0:00:00.397) 0:00:27.045 ******** 2026-04-02 00:42:17.470639 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3) 2026-04-02 00:42:17.470674 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3) 2026-04-02 00:42:17.470687 | orchestrator | 2026-04-02 00:42:17.470698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:17.470710 | orchestrator | Thursday 02 April 2026 00:42:09 +0000 (0:00:00.405) 0:00:27.450 ******** 2026-04-02 00:42:17.470723 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-02 00:42:17.470735 | orchestrator | 2026-04-02 00:42:17.470748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 
00:42:17.470761 | orchestrator | Thursday 02 April 2026 00:42:10 +0000 (0:00:00.311) 0:00:27.762 ******** 2026-04-02 00:42:17.470774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-02 00:42:17.470789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-02 00:42:17.470802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-02 00:42:17.470816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-02 00:42:17.470830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-02 00:42:17.470843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-02 00:42:17.470856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-02 00:42:17.470870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-02 00:42:17.470884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-02 00:42:17.470897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-02 00:42:17.470910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-02 00:42:17.470923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-02 00:42:17.470937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-02 00:42:17.470950 | orchestrator | 2026-04-02 00:42:17.470964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.470978 | 
orchestrator | Thursday 02 April 2026 00:42:10 +0000 (0:00:00.493) 0:00:28.255 ******** 2026-04-02 00:42:17.470991 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.470999 | orchestrator | 2026-04-02 00:42:17.471006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471014 | orchestrator | Thursday 02 April 2026 00:42:10 +0000 (0:00:00.225) 0:00:28.480 ******** 2026-04-02 00:42:17.471021 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471028 | orchestrator | 2026-04-02 00:42:17.471035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471043 | orchestrator | Thursday 02 April 2026 00:42:10 +0000 (0:00:00.168) 0:00:28.648 ******** 2026-04-02 00:42:17.471050 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471057 | orchestrator | 2026-04-02 00:42:17.471082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471094 | orchestrator | Thursday 02 April 2026 00:42:11 +0000 (0:00:00.183) 0:00:28.832 ******** 2026-04-02 00:42:17.471106 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471118 | orchestrator | 2026-04-02 00:42:17.471130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471142 | orchestrator | Thursday 02 April 2026 00:42:11 +0000 (0:00:00.181) 0:00:29.014 ******** 2026-04-02 00:42:17.471152 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471164 | orchestrator | 2026-04-02 00:42:17.471175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471199 | orchestrator | Thursday 02 April 2026 00:42:11 +0000 (0:00:00.183) 0:00:29.197 ******** 2026-04-02 00:42:17.471212 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471225 | orchestrator | 2026-04-02 
00:42:17.471237 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471250 | orchestrator | Thursday 02 April 2026 00:42:11 +0000 (0:00:00.185) 0:00:29.383 ******** 2026-04-02 00:42:17.471263 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471275 | orchestrator | 2026-04-02 00:42:17.471287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471300 | orchestrator | Thursday 02 April 2026 00:42:11 +0000 (0:00:00.183) 0:00:29.566 ******** 2026-04-02 00:42:17.471308 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471315 | orchestrator | 2026-04-02 00:42:17.471322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471338 | orchestrator | Thursday 02 April 2026 00:42:12 +0000 (0:00:00.184) 0:00:29.751 ******** 2026-04-02 00:42:17.471345 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-02 00:42:17.471352 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-02 00:42:17.471360 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-02 00:42:17.471367 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-02 00:42:17.471374 | orchestrator | 2026-04-02 00:42:17.471381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471388 | orchestrator | Thursday 02 April 2026 00:42:12 +0000 (0:00:00.727) 0:00:30.478 ******** 2026-04-02 00:42:17.471396 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:42:17.471403 | orchestrator | 2026-04-02 00:42:17.471410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:17.471417 | orchestrator | Thursday 02 April 2026 00:42:12 +0000 (0:00:00.199) 0:00:30.678 ******** 2026-04-02 00:42:17.471424 | orchestrator | skipping: [testbed-node-4] 2026-04-02 
00:42:17.471431 | orchestrator |
2026-04-02 00:42:17.471464 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:42:17.471476 | orchestrator | Thursday 02 April 2026  00:42:13 +0000 (0:00:00.185)       0:00:30.864 ********
2026-04-02 00:42:17.471483 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:17.471490 | orchestrator |
2026-04-02 00:42:17.471498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-02 00:42:17.471505 | orchestrator | Thursday 02 April 2026  00:42:13 +0000 (0:00:00.499)       0:00:31.363 ********
2026-04-02 00:42:17.471512 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:17.471519 | orchestrator |
2026-04-02 00:42:17.471526 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-02 00:42:17.471533 | orchestrator | Thursday 02 April 2026  00:42:13 +0000 (0:00:00.203)       0:00:31.567 ********
2026-04-02 00:42:17.471540 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:17.471547 | orchestrator |
2026-04-02 00:42:17.471555 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-02 00:42:17.471562 | orchestrator | Thursday 02 April 2026  00:42:14 +0000 (0:00:00.129)       0:00:31.696 ********
2026-04-02 00:42:17.471569 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '88a5a1a0-9236-5c9d-8025-e39ec03fb505'}})
2026-04-02 00:42:17.471577 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b27c5b00-4597-5124-934a-fd641c3feb65'}})
2026-04-02 00:42:17.471584 | orchestrator |
2026-04-02 00:42:17.471591 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-02 00:42:17.471599 | orchestrator | Thursday 02 April 2026  00:42:14 +0000 (0:00:00.229)       0:00:31.926 ********
2026-04-02 00:42:17.471607 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:17.471616 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:17.471630 | orchestrator |
2026-04-02 00:42:17.471637 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-02 00:42:17.471644 | orchestrator | Thursday 02 April 2026  00:42:16 +0000 (0:00:01.831)       0:00:33.757 ********
2026-04-02 00:42:17.471651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:17.471660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:17.471668 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:17.471675 | orchestrator |
2026-04-02 00:42:17.471682 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-02 00:42:17.471689 | orchestrator | Thursday 02 April 2026  00:42:16 +0000 (0:00:00.164)       0:00:33.922 ********
2026-04-02 00:42:17.471696 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:17.471712 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.016586 | orchestrator |
2026-04-02 00:42:23.016710 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-02 00:42:23.016727 | orchestrator | Thursday 02 April 2026  00:42:17 +0000 (0:00:01.292)       0:00:35.214 ********
2026-04-02 00:42:23.016739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:23.016752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.016763 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.016775 | orchestrator |
2026-04-02 00:42:23.016787 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-02 00:42:23.016798 | orchestrator | Thursday 02 April 2026  00:42:17 +0000 (0:00:00.141)       0:00:35.356 ********
2026-04-02 00:42:23.016809 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.016819 | orchestrator |
2026-04-02 00:42:23.016830 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-02 00:42:23.016841 | orchestrator | Thursday 02 April 2026  00:42:17 +0000 (0:00:00.135)       0:00:35.492 ********
2026-04-02 00:42:23.016853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:23.016864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.016875 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.016886 | orchestrator |
2026-04-02 00:42:23.016897 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-02 00:42:23.016908 | orchestrator | Thursday 02 April 2026  00:42:17 +0000 (0:00:00.153)       0:00:35.645 ********
2026-04-02 00:42:23.016919 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.016929 | orchestrator |
2026-04-02 00:42:23.016940 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-02 00:42:23.016951 | orchestrator | Thursday 02 April 2026  00:42:18 +0000 (0:00:00.135)       0:00:35.781 ********
2026-04-02 00:42:23.016962 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:23.016973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.017008 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017019 | orchestrator |
2026-04-02 00:42:23.017030 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-02 00:42:23.017041 | orchestrator | Thursday 02 April 2026  00:42:18 +0000 (0:00:00.145)       0:00:35.927 ********
2026-04-02 00:42:23.017052 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017063 | orchestrator |
2026-04-02 00:42:23.017095 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-02 00:42:23.017106 | orchestrator | Thursday 02 April 2026  00:42:18 +0000 (0:00:00.346)       0:00:36.273 ********
2026-04-02 00:42:23.017117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:23.017128 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.017139 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017150 | orchestrator |
2026-04-02 00:42:23.017160 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-02 00:42:23.017171 | orchestrator | Thursday 02 April 2026  00:42:18 +0000 (0:00:00.155)       0:00:36.428 ********
2026-04-02 00:42:23.017182 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:23.017194 | orchestrator |
2026-04-02 00:42:23.017205 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-02 00:42:23.017216 | orchestrator | Thursday 02 April 2026  00:42:18 +0000 (0:00:00.144)       0:00:36.572 ********
2026-04-02 00:42:23.017227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:23.017238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.017249 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017260 | orchestrator |
2026-04-02 00:42:23.017270 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-02 00:42:23.017281 | orchestrator | Thursday 02 April 2026  00:42:19 +0000 (0:00:00.155)       0:00:36.728 ********
2026-04-02 00:42:23.017292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:23.017303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.017314 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017324 | orchestrator |
2026-04-02 00:42:23.017336 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-02 00:42:23.017367 | orchestrator | Thursday 02 April 2026  00:42:19 +0000 (0:00:00.149)       0:00:36.877 ********
2026-04-02 00:42:23.017379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:23.017390 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:23.017401 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017412 | orchestrator |
2026-04-02 00:42:23.017426 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-02 00:42:23.017478 | orchestrator | Thursday 02 April 2026  00:42:19 +0000 (0:00:00.157)       0:00:37.034 ********
2026-04-02 00:42:23.017500 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017519 | orchestrator |
2026-04-02 00:42:23.017539 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-02 00:42:23.017557 | orchestrator | Thursday 02 April 2026  00:42:19 +0000 (0:00:00.133)       0:00:37.168 ********
2026-04-02 00:42:23.017591 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017611 | orchestrator |
2026-04-02 00:42:23.017630 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-02 00:42:23.017657 | orchestrator | Thursday 02 April 2026  00:42:19 +0000 (0:00:00.133)       0:00:37.302 ********
2026-04-02 00:42:23.017677 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.017696 | orchestrator |
2026-04-02 00:42:23.017713 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-02 00:42:23.017732 | orchestrator | Thursday 02 April 2026  00:42:19 +0000 (0:00:00.137)       0:00:37.440 ********
2026-04-02 00:42:23.017751 | orchestrator | ok: [testbed-node-4] => {
2026-04-02 00:42:23.017768 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-02 00:42:23.017787 | orchestrator | }
2026-04-02 00:42:23.017805 | orchestrator |
2026-04-02 00:42:23.017824 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-02 00:42:23.017844 | orchestrator | Thursday 02 April 2026  00:42:19 +0000 (0:00:00.157)       0:00:37.598 ********
2026-04-02 00:42:23.017865 | orchestrator | ok: [testbed-node-4] => {
2026-04-02 00:42:23.017884 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-02 00:42:23.017904 | orchestrator | }
2026-04-02 00:42:23.017921 | orchestrator |
2026-04-02 00:42:23.017933 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-02 00:42:23.017943 | orchestrator | Thursday 02 April 2026  00:42:20 +0000 (0:00:00.149)       0:00:37.747 ********
2026-04-02 00:42:23.017954 | orchestrator | ok: [testbed-node-4] => {
2026-04-02 00:42:23.017965 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-02 00:42:23.017977 | orchestrator | }
2026-04-02 00:42:23.017987 | orchestrator |
2026-04-02 00:42:23.017999 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-02 00:42:23.018010 | orchestrator | Thursday 02 April 2026  00:42:20 +0000 (0:00:00.139)       0:00:37.887 ********
2026-04-02 00:42:23.018087 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:23.018099 | orchestrator |
2026-04-02 00:42:23.018110 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-02 00:42:23.018121 | orchestrator | Thursday 02 April 2026  00:42:20 +0000 (0:00:00.710)       0:00:38.597 ********
2026-04-02 00:42:23.018132 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:23.018143 | orchestrator |
2026-04-02 00:42:23.018153 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-02 00:42:23.018164 | orchestrator | Thursday 02 April 2026  00:42:21 +0000 (0:00:00.515)       0:00:39.112 ********
2026-04-02 00:42:23.018175 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:23.018186 | orchestrator |
2026-04-02 00:42:23.018196 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-02 00:42:23.018207 | orchestrator | Thursday 02 April 2026  00:42:21 +0000 (0:00:00.493)       0:00:39.606 ********
2026-04-02 00:42:23.018218 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:23.018229 | orchestrator |
2026-04-02 00:42:23.018240 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-02 00:42:23.018250 | orchestrator | Thursday 02 April 2026  00:42:22 +0000 (0:00:00.139)       0:00:39.745 ********
2026-04-02 00:42:23.018261 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.018272 | orchestrator |
2026-04-02 00:42:23.018283 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-02 00:42:23.018294 | orchestrator | Thursday 02 April 2026  00:42:22 +0000 (0:00:00.132)       0:00:39.877 ********
2026-04-02 00:42:23.018305 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.018316 | orchestrator |
2026-04-02 00:42:23.018326 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-02 00:42:23.018337 | orchestrator | Thursday 02 April 2026  00:42:22 +0000 (0:00:00.131)       0:00:40.009 ********
2026-04-02 00:42:23.018348 | orchestrator | ok: [testbed-node-4] => {
2026-04-02 00:42:23.018359 | orchestrator |     "vgs_report": {
2026-04-02 00:42:23.018370 | orchestrator |         "vg": []
2026-04-02 00:42:23.018382 | orchestrator |     }
2026-04-02 00:42:23.018392 | orchestrator | }
2026-04-02 00:42:23.018415 | orchestrator |
2026-04-02 00:42:23.018426 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-02 00:42:23.018459 | orchestrator | Thursday 02 April 2026  00:42:22 +0000 (0:00:00.153)       0:00:40.162 ********
2026-04-02 00:42:23.018470 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.018481 | orchestrator |
2026-04-02 00:42:23.018492 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-02 00:42:23.018503 | orchestrator | Thursday 02 April 2026  00:42:22 +0000 (0:00:00.147)       0:00:40.310 ********
2026-04-02 00:42:23.018514 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.018525 | orchestrator |
2026-04-02 00:42:23.018536 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-02 00:42:23.018547 | orchestrator | Thursday 02 April 2026  00:42:22 +0000 (0:00:00.115)       0:00:40.426 ********
2026-04-02 00:42:23.018557 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.018568 | orchestrator |
2026-04-02 00:42:23.018579 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-02 00:42:23.018590 | orchestrator | Thursday 02 April 2026  00:42:22 +0000 (0:00:00.139)       0:00:40.565 ********
2026-04-02 00:42:23.018601 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:23.018625 | orchestrator |
2026-04-02 00:42:23.018650 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-02 00:42:27.462328 | orchestrator | Thursday 02 April 2026  00:42:23 +0000 (0:00:00.127)       0:00:40.693 ********
2026-04-02 00:42:27.462425 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462517 | orchestrator |
2026-04-02 00:42:27.462530 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-02 00:42:27.462540 | orchestrator | Thursday 02 April 2026  00:42:23 +0000 (0:00:00.132)       0:00:40.825 ********
2026-04-02 00:42:27.462550 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462560 | orchestrator |
2026-04-02 00:42:27.462569 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-02 00:42:27.462579 | orchestrator | Thursday 02 April 2026  00:42:23 +0000 (0:00:00.330)       0:00:41.155 ********
2026-04-02 00:42:27.462589 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462598 | orchestrator |
2026-04-02 00:42:27.462608 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-02 00:42:27.462618 | orchestrator | Thursday 02 April 2026  00:42:23 +0000 (0:00:00.135)       0:00:41.291 ********
2026-04-02 00:42:27.462627 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462637 | orchestrator |
2026-04-02 00:42:27.462646 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-02 00:42:27.462656 | orchestrator | Thursday 02 April 2026  00:42:23 +0000 (0:00:00.140)       0:00:41.431 ********
2026-04-02 00:42:27.462681 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462691 | orchestrator |
2026-04-02 00:42:27.462701 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-02 00:42:27.462710 | orchestrator | Thursday 02 April 2026  00:42:23 +0000 (0:00:00.138)       0:00:41.570 ********
2026-04-02 00:42:27.462720 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462729 | orchestrator |
2026-04-02 00:42:27.462739 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-02 00:42:27.462749 | orchestrator | Thursday 02 April 2026  00:42:24 +0000 (0:00:00.137)       0:00:41.707 ********
2026-04-02 00:42:27.462758 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462768 | orchestrator |
2026-04-02 00:42:27.462777 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-02 00:42:27.462787 | orchestrator | Thursday 02 April 2026  00:42:24 +0000 (0:00:00.132)       0:00:41.840 ********
2026-04-02 00:42:27.462797 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462807 | orchestrator |
2026-04-02 00:42:27.462816 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-02 00:42:27.462826 | orchestrator | Thursday 02 April 2026  00:42:24 +0000 (0:00:00.143)       0:00:41.983 ********
2026-04-02 00:42:27.462835 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462866 | orchestrator |
2026-04-02 00:42:27.462878 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-02 00:42:27.462889 | orchestrator | Thursday 02 April 2026  00:42:24 +0000 (0:00:00.131)       0:00:42.115 ********
2026-04-02 00:42:27.462900 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462911 | orchestrator |
2026-04-02 00:42:27.462922 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-02 00:42:27.462933 | orchestrator | Thursday 02 April 2026  00:42:24 +0000 (0:00:00.131)       0:00:42.246 ********
2026-04-02 00:42:27.462945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.462958 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.462969 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.462980 | orchestrator |
2026-04-02 00:42:27.462990 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-02 00:42:27.463001 | orchestrator | Thursday 02 April 2026  00:42:24 +0000 (0:00:00.158)       0:00:42.405 ********
2026-04-02 00:42:27.463012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463035 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463046 | orchestrator |
2026-04-02 00:42:27.463057 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-02 00:42:27.463068 | orchestrator | Thursday 02 April 2026  00:42:24 +0000 (0:00:00.151)       0:00:42.557 ********
2026-04-02 00:42:27.463080 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463102 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463113 | orchestrator |
2026-04-02 00:42:27.463124 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-02 00:42:27.463135 | orchestrator | Thursday 02 April 2026  00:42:25 +0000 (0:00:00.154)       0:00:42.711 ********
2026-04-02 00:42:27.463146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463169 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463181 | orchestrator |
2026-04-02 00:42:27.463207 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-02 00:42:27.463218 | orchestrator | Thursday 02 April 2026  00:42:25 +0000 (0:00:00.355)       0:00:43.067 ********
2026-04-02 00:42:27.463228 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463247 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463257 | orchestrator |
2026-04-02 00:42:27.463266 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-02 00:42:27.463276 | orchestrator | Thursday 02 April 2026  00:42:25 +0000 (0:00:00.164)       0:00:43.231 ********
2026-04-02 00:42:27.463292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463312 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463322 | orchestrator |
2026-04-02 00:42:27.463331 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-02 00:42:27.463341 | orchestrator | Thursday 02 April 2026  00:42:25 +0000 (0:00:00.138)       0:00:43.369 ********
2026-04-02 00:42:27.463351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463361 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463370 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463380 | orchestrator |
2026-04-02 00:42:27.463390 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-02 00:42:27.463399 | orchestrator | Thursday 02 April 2026  00:42:25 +0000 (0:00:00.154)       0:00:43.524 ********
2026-04-02 00:42:27.463409 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463428 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463460 | orchestrator |
2026-04-02 00:42:27.463470 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-02 00:42:27.463480 | orchestrator | Thursday 02 April 2026  00:42:25 +0000 (0:00:00.133)       0:00:43.657 ********
2026-04-02 00:42:27.463489 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:27.463499 | orchestrator |
2026-04-02 00:42:27.463509 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-02 00:42:27.463519 | orchestrator | Thursday 02 April 2026  00:42:26 +0000 (0:00:00.496)       0:00:44.153 ********
2026-04-02 00:42:27.463528 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:27.463538 | orchestrator |
2026-04-02 00:42:27.463548 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-02 00:42:27.463557 | orchestrator | Thursday 02 April 2026  00:42:26 +0000 (0:00:00.521)       0:00:44.675 ********
2026-04-02 00:42:27.463567 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:42:27.463576 | orchestrator |
2026-04-02 00:42:27.463586 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-02 00:42:27.463596 | orchestrator | Thursday 02 April 2026  00:42:27 +0000 (0:00:00.117)       0:00:44.792 ********
2026-04-02 00:42:27.463606 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'vg_name': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463616 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'vg_name': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463625 | orchestrator |
2026-04-02 00:42:27.463635 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-02 00:42:27.463645 | orchestrator | Thursday 02 April 2026  00:42:27 +0000 (0:00:00.147)       0:00:44.940 ********
2026-04-02 00:42:27.463655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:27.463710 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:27.463726 | orchestrator |
2026-04-02 00:42:27.463736 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-02 00:42:27.463746 | orchestrator | Thursday 02 April 2026  00:42:27 +0000 (0:00:00.136)       0:00:45.077 ********
2026-04-02 00:42:27.463756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:27.463772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:32.729889 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:32.730009 | orchestrator |
2026-04-02 00:42:32.730107 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-02 00:42:32.730122 | orchestrator | Thursday 02 April 2026  00:42:27 +0000 (0:00:00.129)       0:00:45.206 ********
2026-04-02 00:42:32.730134 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
2026-04-02 00:42:32.730146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})
2026-04-02 00:42:32.730158 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:42:32.730169 | orchestrator |
2026-04-02 00:42:32.730184 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-02 00:42:32.730203 | orchestrator | Thursday 02 April 2026  00:42:27 +0000 (0:00:00.147)       0:00:45.354 ********
2026-04-02 00:42:32.730221 | orchestrator | ok: [testbed-node-4] => {
2026-04-02 00:42:32.730240 | orchestrator |     "lvm_report": {
2026-04-02 00:42:32.730260 | orchestrator |         "lv": [
2026-04-02 00:42:32.730314 | orchestrator |             {
2026-04-02 00:42:32.730355 | orchestrator |                 "lv_name": "osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505",
2026-04-02 00:42:32.730375 | orchestrator |                 "vg_name": "ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505"
2026-04-02 00:42:32.730387 | orchestrator |             },
2026-04-02 00:42:32.730398 | orchestrator |             {
2026-04-02 00:42:32.730424 | orchestrator |                 "lv_name": "osd-block-b27c5b00-4597-5124-934a-fd641c3feb65",
2026-04-02 00:42:32.730466 | orchestrator |                 "vg_name": "ceph-b27c5b00-4597-5124-934a-fd641c3feb65"
2026-04-02 00:42:32.730481 | orchestrator |             }
2026-04-02 00:42:32.730494 | orchestrator |         ],
2026-04-02 00:42:32.730507 | orchestrator |         "pv": [
2026-04-02 00:42:32.730520 | orchestrator |             {
2026-04-02 00:42:32.730533 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-02 00:42:32.730546 | orchestrator |                 "vg_name": "ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505"
2026-04-02 00:42:32.730558 | orchestrator |             },
2026-04-02 00:42:32.730575 | orchestrator |             {
2026-04-02 00:42:32.730596 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-02 00:42:32.730616 | orchestrator |                 "vg_name": "ceph-b27c5b00-4597-5124-934a-fd641c3feb65"
2026-04-02 00:42:32.730645 | orchestrator |             }
2026-04-02 00:42:32.730666 | orchestrator |         ]
2026-04-02 00:42:32.730686 | orchestrator |     }
2026-04-02 00:42:32.730705 | orchestrator | }
2026-04-02 00:42:32.730740 | orchestrator |
2026-04-02 00:42:32.730759 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-02 00:42:32.730780 | orchestrator |
2026-04-02 00:42:32.730793 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-02 00:42:32.730804 | orchestrator | Thursday 02 April 2026  00:42:28 +0000 (0:00:00.398)       0:00:45.752 ********
2026-04-02 00:42:32.730815 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-02 00:42:32.730826 | orchestrator |
2026-04-02 00:42:32.730837 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-02 00:42:32.730848 | orchestrator | Thursday 02 April 2026  00:42:28 +0000 (0:00:00.232)       0:00:45.985 ********
2026-04-02 00:42:32.730882 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:42:32.730893 | orchestrator |
2026-04-02 00:42:32.730904 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.730915 | orchestrator | Thursday 02 April 2026  00:42:28 +0000 (0:00:00.189)       0:00:46.175 ********
2026-04-02 00:42:32.730926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-02 00:42:32.730936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-02 00:42:32.730947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-02 00:42:32.730961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-02 00:42:32.730972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-02 00:42:32.730997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-02 00:42:32.731008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-02 00:42:32.731019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-02 00:42:32.731029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-02 00:42:32.731040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-02 00:42:32.731051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-02 00:42:32.731062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-02 00:42:32.731072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-02 00:42:32.731083 | orchestrator |
2026-04-02 00:42:32.731094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.731105 | orchestrator | Thursday 02 April 2026  00:42:28 +0000 (0:00:00.378)       0:00:46.553 ********
2026-04-02 00:42:32.731115 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:42:32.731126 | orchestrator |
2026-04-02 00:42:32.731137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.731147 | orchestrator | Thursday 02 April 2026  00:42:29 +0000 (0:00:00.160)       0:00:46.713 ********
2026-04-02 00:42:32.731158 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:42:32.731169 | orchestrator |
2026-04-02 00:42:32.731180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.731211 | orchestrator | Thursday 02 April 2026  00:42:29 +0000 (0:00:00.153)       0:00:46.866 ********
2026-04-02 00:42:32.731222 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:42:32.731233 | orchestrator |
2026-04-02 00:42:32.731244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.731255 | orchestrator | Thursday 02 April 2026  00:42:29 +0000 (0:00:00.175)       0:00:47.042 ********
2026-04-02 00:42:32.731266 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:42:32.731276 | orchestrator |
2026-04-02 00:42:32.731287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.731298 | orchestrator | Thursday 02 April 2026  00:42:29 +0000 (0:00:00.166)       0:00:47.208 ********
2026-04-02 00:42:32.731320 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:42:32.731331 | orchestrator |
2026-04-02 00:42:32.731342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.731353 | orchestrator | Thursday 02 April 2026  00:42:29 +0000 (0:00:00.164)       0:00:47.373 ********
2026-04-02 00:42:32.731364 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:42:32.731374 | orchestrator |
2026-04-02 00:42:32.731385 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-02 00:42:32.731404 | orchestrator | Thursday 02 April 2026  00:42:30 +0000 (0:00:00.413)       0:00:47.786 ********
2026-04-02 00:42:32.731415 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:42:32.731462 | orchestrator |
2026-04-02 00:42:32.731474 | orchestrator | TASK [Add known links to the list of available
block devices] ****************** 2026-04-02 00:42:32.731485 | orchestrator | Thursday 02 April 2026 00:42:30 +0000 (0:00:00.169) 0:00:47.955 ******** 2026-04-02 00:42:32.731496 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:32.731507 | orchestrator | 2026-04-02 00:42:32.731531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:32.731543 | orchestrator | Thursday 02 April 2026 00:42:30 +0000 (0:00:00.166) 0:00:48.122 ******** 2026-04-02 00:42:32.731554 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439) 2026-04-02 00:42:32.731566 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439) 2026-04-02 00:42:32.731577 | orchestrator | 2026-04-02 00:42:32.731587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:32.731598 | orchestrator | Thursday 02 April 2026 00:42:30 +0000 (0:00:00.378) 0:00:48.501 ******** 2026-04-02 00:42:32.731609 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a) 2026-04-02 00:42:32.731620 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a) 2026-04-02 00:42:32.731630 | orchestrator | 2026-04-02 00:42:32.731641 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:32.731652 | orchestrator | Thursday 02 April 2026 00:42:31 +0000 (0:00:00.389) 0:00:48.891 ******** 2026-04-02 00:42:32.731663 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9) 2026-04-02 00:42:32.731674 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9) 2026-04-02 00:42:32.731685 | orchestrator | 2026-04-02 00:42:32.731695 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-02 00:42:32.731706 | orchestrator | Thursday 02 April 2026 00:42:31 +0000 (0:00:00.415) 0:00:49.306 ******** 2026-04-02 00:42:32.731717 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21) 2026-04-02 00:42:32.731728 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21) 2026-04-02 00:42:32.731739 | orchestrator | 2026-04-02 00:42:32.731750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-02 00:42:32.731761 | orchestrator | Thursday 02 April 2026 00:42:32 +0000 (0:00:00.416) 0:00:49.723 ******** 2026-04-02 00:42:32.731771 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-02 00:42:32.731782 | orchestrator | 2026-04-02 00:42:32.731793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:32.731804 | orchestrator | Thursday 02 April 2026 00:42:32 +0000 (0:00:00.339) 0:00:50.063 ******** 2026-04-02 00:42:32.731827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-02 00:42:32.731838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-02 00:42:32.731849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-02 00:42:32.731859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-02 00:42:32.731870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-02 00:42:32.731881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-02 00:42:32.731891 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-02 00:42:32.731914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-02 00:42:32.731925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-02 00:42:32.731943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-02 00:42:32.731955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-02 00:42:32.731973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-02 00:42:41.225911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-02 00:42:41.226128 | orchestrator | 2026-04-02 00:42:41.226160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226173 | orchestrator | Thursday 02 April 2026 00:42:32 +0000 (0:00:00.425) 0:00:50.488 ******** 2026-04-02 00:42:41.226185 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226197 | orchestrator | 2026-04-02 00:42:41.226208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226219 | orchestrator | Thursday 02 April 2026 00:42:33 +0000 (0:00:00.196) 0:00:50.685 ******** 2026-04-02 00:42:41.226230 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226240 | orchestrator | 2026-04-02 00:42:41.226252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226262 | orchestrator | Thursday 02 April 2026 00:42:33 +0000 (0:00:00.235) 0:00:50.921 ******** 2026-04-02 00:42:41.226273 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226284 | orchestrator | 2026-04-02 00:42:41.226295 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226323 | orchestrator | Thursday 02 April 2026 00:42:33 +0000 (0:00:00.650) 0:00:51.572 ******** 2026-04-02 00:42:41.226335 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226346 | orchestrator | 2026-04-02 00:42:41.226357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226368 | orchestrator | Thursday 02 April 2026 00:42:34 +0000 (0:00:00.229) 0:00:51.801 ******** 2026-04-02 00:42:41.226379 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226390 | orchestrator | 2026-04-02 00:42:41.226400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226411 | orchestrator | Thursday 02 April 2026 00:42:34 +0000 (0:00:00.210) 0:00:52.012 ******** 2026-04-02 00:42:41.226474 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226488 | orchestrator | 2026-04-02 00:42:41.226501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226514 | orchestrator | Thursday 02 April 2026 00:42:34 +0000 (0:00:00.208) 0:00:52.220 ******** 2026-04-02 00:42:41.226526 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226539 | orchestrator | 2026-04-02 00:42:41.226551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226564 | orchestrator | Thursday 02 April 2026 00:42:34 +0000 (0:00:00.206) 0:00:52.427 ******** 2026-04-02 00:42:41.226577 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226590 | orchestrator | 2026-04-02 00:42:41.226602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226614 | orchestrator | Thursday 02 April 2026 00:42:34 +0000 (0:00:00.202) 0:00:52.629 ******** 
2026-04-02 00:42:41.226628 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-02 00:42:41.226642 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-02 00:42:41.226654 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-02 00:42:41.226667 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-02 00:42:41.226679 | orchestrator | 2026-04-02 00:42:41.226691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226703 | orchestrator | Thursday 02 April 2026 00:42:35 +0000 (0:00:00.634) 0:00:53.263 ******** 2026-04-02 00:42:41.226716 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226729 | orchestrator | 2026-04-02 00:42:41.226742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226777 | orchestrator | Thursday 02 April 2026 00:42:35 +0000 (0:00:00.197) 0:00:53.460 ******** 2026-04-02 00:42:41.226791 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226803 | orchestrator | 2026-04-02 00:42:41.226814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226825 | orchestrator | Thursday 02 April 2026 00:42:35 +0000 (0:00:00.213) 0:00:53.674 ******** 2026-04-02 00:42:41.226835 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226846 | orchestrator | 2026-04-02 00:42:41.226857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-02 00:42:41.226868 | orchestrator | Thursday 02 April 2026 00:42:36 +0000 (0:00:00.203) 0:00:53.877 ******** 2026-04-02 00:42:41.226878 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226889 | orchestrator | 2026-04-02 00:42:41.226900 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-02 00:42:41.226911 | orchestrator | Thursday 02 April 2026 00:42:36 
+0000 (0:00:00.195) 0:00:54.073 ******** 2026-04-02 00:42:41.226923 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.226941 | orchestrator | 2026-04-02 00:42:41.226961 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-02 00:42:41.226979 | orchestrator | Thursday 02 April 2026 00:42:36 +0000 (0:00:00.336) 0:00:54.410 ******** 2026-04-02 00:42:41.226999 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}}) 2026-04-02 00:42:41.227018 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bc329f0f-76ef-5b6a-a482-1349b51ce957'}}) 2026-04-02 00:42:41.227030 | orchestrator | 2026-04-02 00:42:41.227040 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-02 00:42:41.227052 | orchestrator | Thursday 02 April 2026 00:42:36 +0000 (0:00:00.180) 0:00:54.590 ******** 2026-04-02 00:42:41.227064 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}) 2026-04-02 00:42:41.227076 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'}) 2026-04-02 00:42:41.227087 | orchestrator | 2026-04-02 00:42:41.227098 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-02 00:42:41.227129 | orchestrator | Thursday 02 April 2026 00:42:38 +0000 (0:00:01.804) 0:00:56.395 ******** 2026-04-02 00:42:41.227141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:41.227154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:41.227165 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227176 | orchestrator | 2026-04-02 00:42:41.227186 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-02 00:42:41.227197 | orchestrator | Thursday 02 April 2026 00:42:38 +0000 (0:00:00.162) 0:00:56.557 ******** 2026-04-02 00:42:41.227208 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}) 2026-04-02 00:42:41.227220 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'}) 2026-04-02 00:42:41.227230 | orchestrator | 2026-04-02 00:42:41.227241 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-02 00:42:41.227252 | orchestrator | Thursday 02 April 2026 00:42:40 +0000 (0:00:01.265) 0:00:57.822 ******** 2026-04-02 00:42:41.227263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:41.227283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:41.227295 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227305 | orchestrator | 2026-04-02 00:42:41.227316 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-02 00:42:41.227327 | orchestrator | Thursday 02 April 2026 00:42:40 +0000 (0:00:00.139) 0:00:57.962 ******** 2026-04-02 00:42:41.227338 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227349 | 
orchestrator | 2026-04-02 00:42:41.227360 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-02 00:42:41.227371 | orchestrator | Thursday 02 April 2026 00:42:40 +0000 (0:00:00.122) 0:00:58.085 ******** 2026-04-02 00:42:41.227382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:41.227393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:41.227404 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227414 | orchestrator | 2026-04-02 00:42:41.227468 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-02 00:42:41.227480 | orchestrator | Thursday 02 April 2026 00:42:40 +0000 (0:00:00.148) 0:00:58.233 ******** 2026-04-02 00:42:41.227491 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227502 | orchestrator | 2026-04-02 00:42:41.227513 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-02 00:42:41.227554 | orchestrator | Thursday 02 April 2026 00:42:40 +0000 (0:00:00.115) 0:00:58.349 ******** 2026-04-02 00:42:41.227587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:41.227605 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:41.227622 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227639 | orchestrator | 2026-04-02 00:42:41.227655 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-02 00:42:41.227673 | orchestrator | Thursday 02 April 2026 00:42:40 +0000 (0:00:00.137) 0:00:58.486 ******** 2026-04-02 00:42:41.227691 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227710 | orchestrator | 2026-04-02 00:42:41.227727 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-02 00:42:41.227746 | orchestrator | Thursday 02 April 2026 00:42:40 +0000 (0:00:00.116) 0:00:58.603 ******** 2026-04-02 00:42:41.227765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:41.227784 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:41.227803 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:41.227821 | orchestrator | 2026-04-02 00:42:41.227840 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-02 00:42:41.227859 | orchestrator | Thursday 02 April 2026 00:42:41 +0000 (0:00:00.124) 0:00:58.728 ******** 2026-04-02 00:42:41.227877 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:41.227897 | orchestrator | 2026-04-02 00:42:41.227915 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-02 00:42:41.227935 | orchestrator | Thursday 02 April 2026 00:42:41 +0000 (0:00:00.121) 0:00:58.850 ******** 2026-04-02 00:42:41.227968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:46.705044 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:46.705169 | 
orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705180 | orchestrator | 2026-04-02 00:42:46.705189 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-02 00:42:46.705199 | orchestrator | Thursday 02 April 2026 00:42:41 +0000 (0:00:00.267) 0:00:59.117 ******** 2026-04-02 00:42:46.705206 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:46.705211 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:46.705215 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705219 | orchestrator | 2026-04-02 00:42:46.705239 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-02 00:42:46.705244 | orchestrator | Thursday 02 April 2026 00:42:41 +0000 (0:00:00.133) 0:00:59.250 ******** 2026-04-02 00:42:46.705247 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:46.705252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:46.705255 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705259 | orchestrator | 2026-04-02 00:42:46.705263 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-02 00:42:46.705267 | orchestrator | Thursday 02 April 2026 00:42:41 +0000 (0:00:00.119) 0:00:59.370 ******** 2026-04-02 00:42:46.705271 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705275 | orchestrator | 2026-04-02 00:42:46.705278 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-02 00:42:46.705282 | orchestrator | Thursday 02 April 2026 00:42:41 +0000 (0:00:00.111) 0:00:59.481 ******** 2026-04-02 00:42:46.705286 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705290 | orchestrator | 2026-04-02 00:42:46.705293 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-02 00:42:46.705297 | orchestrator | Thursday 02 April 2026 00:42:41 +0000 (0:00:00.115) 0:00:59.596 ******** 2026-04-02 00:42:46.705301 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705305 | orchestrator | 2026-04-02 00:42:46.705309 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-02 00:42:46.705313 | orchestrator | Thursday 02 April 2026 00:42:42 +0000 (0:00:00.128) 0:00:59.725 ******** 2026-04-02 00:42:46.705317 | orchestrator | ok: [testbed-node-5] => { 2026-04-02 00:42:46.705322 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-02 00:42:46.705326 | orchestrator | } 2026-04-02 00:42:46.705331 | orchestrator | 2026-04-02 00:42:46.705334 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-02 00:42:46.705338 | orchestrator | Thursday 02 April 2026 00:42:42 +0000 (0:00:00.153) 0:00:59.878 ******** 2026-04-02 00:42:46.705342 | orchestrator | ok: [testbed-node-5] => { 2026-04-02 00:42:46.705346 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-02 00:42:46.705350 | orchestrator | } 2026-04-02 00:42:46.705353 | orchestrator | 2026-04-02 00:42:46.705357 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-02 00:42:46.705361 | orchestrator | Thursday 02 April 2026 00:42:42 +0000 (0:00:00.121) 0:00:59.999 ******** 2026-04-02 00:42:46.705365 | orchestrator | ok: [testbed-node-5] => { 2026-04-02 00:42:46.705369 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-02 00:42:46.705373 | orchestrator | } 2026-04-02 00:42:46.705376 | orchestrator | 2026-04-02 00:42:46.705380 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-02 00:42:46.705384 | orchestrator | Thursday 02 April 2026 00:42:42 +0000 (0:00:00.137) 0:01:00.137 ******** 2026-04-02 00:42:46.705406 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:46.705410 | orchestrator | 2026-04-02 00:42:46.705457 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-02 00:42:46.705461 | orchestrator | Thursday 02 April 2026 00:42:42 +0000 (0:00:00.505) 0:01:00.643 ******** 2026-04-02 00:42:46.705465 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:46.705470 | orchestrator | 2026-04-02 00:42:46.705476 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-02 00:42:46.705482 | orchestrator | Thursday 02 April 2026 00:42:43 +0000 (0:00:00.488) 0:01:01.131 ******** 2026-04-02 00:42:46.705488 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:46.705494 | orchestrator | 2026-04-02 00:42:46.705500 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-02 00:42:46.705506 | orchestrator | Thursday 02 April 2026 00:42:43 +0000 (0:00:00.480) 0:01:01.612 ******** 2026-04-02 00:42:46.705512 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:46.705518 | orchestrator | 2026-04-02 00:42:46.705524 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-02 00:42:46.705531 | orchestrator | Thursday 02 April 2026 00:42:44 +0000 (0:00:00.281) 0:01:01.893 ******** 2026-04-02 00:42:46.705537 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705543 | orchestrator | 2026-04-02 00:42:46.705550 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-02 00:42:46.705556 | orchestrator | Thursday 02 April 2026 00:42:44 +0000 (0:00:00.122) 0:01:02.016 ******** 2026-04-02 00:42:46.705562 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705569 | orchestrator | 2026-04-02 00:42:46.705575 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-02 00:42:46.705582 | orchestrator | Thursday 02 April 2026 00:42:44 +0000 (0:00:00.100) 0:01:02.116 ******** 2026-04-02 00:42:46.705588 | orchestrator | ok: [testbed-node-5] => { 2026-04-02 00:42:46.705594 | orchestrator |  "vgs_report": { 2026-04-02 00:42:46.705601 | orchestrator |  "vg": [] 2026-04-02 00:42:46.705626 | orchestrator |  } 2026-04-02 00:42:46.705634 | orchestrator | } 2026-04-02 00:42:46.705641 | orchestrator | 2026-04-02 00:42:46.705648 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-02 00:42:46.705655 | orchestrator | Thursday 02 April 2026 00:42:44 +0000 (0:00:00.126) 0:01:02.243 ******** 2026-04-02 00:42:46.705662 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705668 | orchestrator | 2026-04-02 00:42:46.705675 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-02 00:42:46.705681 | orchestrator | Thursday 02 April 2026 00:42:44 +0000 (0:00:00.136) 0:01:02.379 ******** 2026-04-02 00:42:46.705688 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705695 | orchestrator | 2026-04-02 00:42:46.705701 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-02 00:42:46.705708 | orchestrator | Thursday 02 April 2026 00:42:44 +0000 (0:00:00.129) 0:01:02.509 ******** 2026-04-02 00:42:46.705714 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705720 | orchestrator | 2026-04-02 00:42:46.705726 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-02 00:42:46.705739 | orchestrator | Thursday 02 April 2026 00:42:44 +0000 (0:00:00.138) 0:01:02.647 ******** 2026-04-02 00:42:46.705746 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705753 | orchestrator | 2026-04-02 00:42:46.705759 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-02 00:42:46.705765 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.136) 0:01:02.783 ******** 2026-04-02 00:42:46.705772 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705778 | orchestrator | 2026-04-02 00:42:46.705784 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-02 00:42:46.705791 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.117) 0:01:02.901 ******** 2026-04-02 00:42:46.705797 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705811 | orchestrator | 2026-04-02 00:42:46.705818 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-02 00:42:46.705825 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.095) 0:01:02.997 ******** 2026-04-02 00:42:46.705831 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705838 | orchestrator | 2026-04-02 00:42:46.705844 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-02 00:42:46.705851 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.100) 0:01:03.098 ******** 2026-04-02 00:42:46.705857 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705863 | orchestrator | 2026-04-02 00:42:46.705870 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-02 00:42:46.705877 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.115) 0:01:03.213 ******** 2026-04-02 00:42:46.705883 | orchestrator | skipping: 
[testbed-node-5] 2026-04-02 00:42:46.705890 | orchestrator | 2026-04-02 00:42:46.705896 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-02 00:42:46.705903 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.214) 0:01:03.427 ******** 2026-04-02 00:42:46.705909 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705916 | orchestrator | 2026-04-02 00:42:46.705922 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-02 00:42:46.705928 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.096) 0:01:03.524 ******** 2026-04-02 00:42:46.705935 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705941 | orchestrator | 2026-04-02 00:42:46.705947 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-02 00:42:46.705953 | orchestrator | Thursday 02 April 2026 00:42:45 +0000 (0:00:00.124) 0:01:03.648 ******** 2026-04-02 00:42:46.705960 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705966 | orchestrator | 2026-04-02 00:42:46.705972 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-02 00:42:46.705978 | orchestrator | Thursday 02 April 2026 00:42:46 +0000 (0:00:00.135) 0:01:03.784 ******** 2026-04-02 00:42:46.705985 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.705991 | orchestrator | 2026-04-02 00:42:46.705997 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-02 00:42:46.706003 | orchestrator | Thursday 02 April 2026 00:42:46 +0000 (0:00:00.131) 0:01:03.915 ******** 2026-04-02 00:42:46.706010 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.706054 | orchestrator | 2026-04-02 00:42:46.706061 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-02 00:42:46.706067 | 
orchestrator | Thursday 02 April 2026 00:42:46 +0000 (0:00:00.133) 0:01:04.049 ******** 2026-04-02 00:42:46.706073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:46.706081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:46.706087 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.706093 | orchestrator | 2026-04-02 00:42:46.706100 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-02 00:42:46.706106 | orchestrator | Thursday 02 April 2026 00:42:46 +0000 (0:00:00.136) 0:01:04.185 ******** 2026-04-02 00:42:46.706112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:46.706119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:46.706126 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:46.706130 | orchestrator | 2026-04-02 00:42:46.706134 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-02 00:42:46.706145 | orchestrator | Thursday 02 April 2026 00:42:46 +0000 (0:00:00.156) 0:01:04.342 ******** 2026-04-02 00:42:46.706154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 
00:42:49.433235 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433245 | orchestrator | 2026-04-02 00:42:49.433252 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-02 00:42:49.433261 | orchestrator | Thursday 02 April 2026 00:42:46 +0000 (0:00:00.123) 0:01:04.465 ******** 2026-04-02 00:42:49.433268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433295 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433312 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433321 | orchestrator | 2026-04-02 00:42:49.433331 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-02 00:42:49.433341 | orchestrator | Thursday 02 April 2026 00:42:46 +0000 (0:00:00.131) 0:01:04.597 ******** 2026-04-02 00:42:49.433352 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433361 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433373 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433379 | orchestrator | 2026-04-02 00:42:49.433385 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-02 00:42:49.433392 | orchestrator | Thursday 02 April 2026 00:42:47 +0000 (0:00:00.138) 0:01:04.736 ******** 2026-04-02 00:42:49.433398 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 
'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433409 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433447 | orchestrator | 2026-04-02 00:42:49.433453 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-02 00:42:49.433458 | orchestrator | Thursday 02 April 2026 00:42:47 +0000 (0:00:00.122) 0:01:04.858 ******** 2026-04-02 00:42:49.433464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433476 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433482 | orchestrator | 2026-04-02 00:42:49.433487 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-02 00:42:49.433493 | orchestrator | Thursday 02 April 2026 00:42:47 +0000 (0:00:00.253) 0:01:05.112 ******** 2026-04-02 00:42:49.433499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433511 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433539 | orchestrator | 2026-04-02 00:42:49.433545 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-02 
00:42:49.433551 | orchestrator | Thursday 02 April 2026 00:42:47 +0000 (0:00:00.131) 0:01:05.243 ******** 2026-04-02 00:42:49.433557 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:49.433564 | orchestrator | 2026-04-02 00:42:49.433570 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-02 00:42:49.433576 | orchestrator | Thursday 02 April 2026 00:42:48 +0000 (0:00:00.507) 0:01:05.750 ******** 2026-04-02 00:42:49.433581 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:49.433587 | orchestrator | 2026-04-02 00:42:49.433593 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-02 00:42:49.433598 | orchestrator | Thursday 02 April 2026 00:42:48 +0000 (0:00:00.488) 0:01:06.239 ******** 2026-04-02 00:42:49.433604 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:42:49.433610 | orchestrator | 2026-04-02 00:42:49.433616 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-02 00:42:49.433622 | orchestrator | Thursday 02 April 2026 00:42:48 +0000 (0:00:00.172) 0:01:06.412 ******** 2026-04-02 00:42:49.433628 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'vg_name': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'}) 2026-04-02 00:42:49.433637 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'vg_name': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'}) 2026-04-02 00:42:49.433644 | orchestrator | 2026-04-02 00:42:49.433651 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-02 00:42:49.433658 | orchestrator | Thursday 02 April 2026 00:42:48 +0000 (0:00:00.154) 0:01:06.566 ******** 2026-04-02 00:42:49.433681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 
'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433695 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433702 | orchestrator | 2026-04-02 00:42:49.433709 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-02 00:42:49.433716 | orchestrator | Thursday 02 April 2026 00:42:49 +0000 (0:00:00.138) 0:01:06.705 ******** 2026-04-02 00:42:49.433723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433737 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433744 | orchestrator | 2026-04-02 00:42:49.433751 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-02 00:42:49.433758 | orchestrator | Thursday 02 April 2026 00:42:49 +0000 (0:00:00.136) 0:01:06.841 ******** 2026-04-02 00:42:49.433764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})  2026-04-02 00:42:49.433770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})  2026-04-02 00:42:49.433776 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:42:49.433782 | orchestrator | 2026-04-02 00:42:49.433787 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-02 
00:42:49.433793 | orchestrator | Thursday 02 April 2026 00:42:49 +0000 (0:00:00.134) 0:01:06.976 ******** 2026-04-02 00:42:49.433799 | orchestrator | ok: [testbed-node-5] => { 2026-04-02 00:42:49.433805 | orchestrator |  "lvm_report": { 2026-04-02 00:42:49.433811 | orchestrator |  "lv": [ 2026-04-02 00:42:49.433824 | orchestrator |  { 2026-04-02 00:42:49.433830 | orchestrator |  "lv_name": "osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957", 2026-04-02 00:42:49.433837 | orchestrator |  "vg_name": "ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957" 2026-04-02 00:42:49.433843 | orchestrator |  }, 2026-04-02 00:42:49.433848 | orchestrator |  { 2026-04-02 00:42:49.433854 | orchestrator |  "lv_name": "osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba", 2026-04-02 00:42:49.433860 | orchestrator |  "vg_name": "ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba" 2026-04-02 00:42:49.433866 | orchestrator |  } 2026-04-02 00:42:49.433872 | orchestrator |  ], 2026-04-02 00:42:49.433878 | orchestrator |  "pv": [ 2026-04-02 00:42:49.433883 | orchestrator |  { 2026-04-02 00:42:49.433889 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-02 00:42:49.433895 | orchestrator |  "vg_name": "ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba" 2026-04-02 00:42:49.433901 | orchestrator |  }, 2026-04-02 00:42:49.433906 | orchestrator |  { 2026-04-02 00:42:49.433912 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-02 00:42:49.433918 | orchestrator |  "vg_name": "ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957" 2026-04-02 00:42:49.433924 | orchestrator |  } 2026-04-02 00:42:49.433930 | orchestrator |  ] 2026-04-02 00:42:49.433936 | orchestrator |  } 2026-04-02 00:42:49.433942 | orchestrator | } 2026-04-02 00:42:49.433948 | orchestrator | 2026-04-02 00:42:49.433954 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:42:49.433960 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-02 00:42:49.433966 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-02 00:42:49.433972 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-02 00:42:49.433977 | orchestrator | 2026-04-02 00:42:49.433983 | orchestrator | 2026-04-02 00:42:49.433989 | orchestrator | 2026-04-02 00:42:49.434002 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:42:49.434008 | orchestrator | Thursday 02 April 2026 00:42:49 +0000 (0:00:00.124) 0:01:07.101 ******** 2026-04-02 00:42:49.434065 | orchestrator | =============================================================================== 2026-04-02 00:42:49.434073 | orchestrator | Create block VGs -------------------------------------------------------- 5.73s 2026-04-02 00:42:49.434079 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s 2026-04-02 00:42:49.434085 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s 2026-04-02 00:42:49.434091 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2026-04-02 00:42:49.434096 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s 2026-04-02 00:42:49.434102 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.49s 2026-04-02 00:42:49.434108 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.48s 2026-04-02 00:42:49.434113 | orchestrator | Add known partitions to the list of available block devices ------------- 1.35s 2026-04-02 00:42:49.434124 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s 2026-04-02 00:42:49.695705 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s 2026-04-02 
00:42:49.695774 | orchestrator | Print LVM report data --------------------------------------------------- 0.79s 2026-04-02 00:42:49.695787 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-04-02 00:42:49.695798 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-04-02 00:42:49.695808 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-04-02 00:42:49.695851 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-04-02 00:42:49.695862 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.65s 2026-04-02 00:42:49.695889 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-04-02 00:42:49.695901 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2026-04-02 00:42:49.695912 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.62s 2026-04-02 00:42:49.695923 | orchestrator | Get initial list of available block devices ----------------------------- 0.62s 2026-04-02 00:43:00.933487 | orchestrator | 2026-04-02 00:43:00 | INFO  | Prepare task for execution of facts. 2026-04-02 00:43:00.999104 | orchestrator | 2026-04-02 00:43:00 | INFO  | Task 1f3484ac-8115-4f99-9aeb-9fa92f138da0 (facts) was prepared for execution. 2026-04-02 00:43:00.999210 | orchestrator | 2026-04-02 00:43:00 | INFO  | It takes a moment until task 1f3484ac-8115-4f99-9aeb-9fa92f138da0 (facts) has been started and output is visible here. 
2026-04-02 00:43:12.192759 | orchestrator | 2026-04-02 00:43:12.193837 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-02 00:43:12.193877 | orchestrator | 2026-04-02 00:43:12.193900 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-02 00:43:12.193918 | orchestrator | Thursday 02 April 2026 00:43:03 +0000 (0:00:00.317) 0:00:00.317 ******** 2026-04-02 00:43:12.193938 | orchestrator | ok: [testbed-manager] 2026-04-02 00:43:12.193955 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:43:12.193967 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:43:12.193978 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:43:12.193989 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:43:12.194000 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:43:12.194011 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:43:12.194081 | orchestrator | 2026-04-02 00:43:12.194093 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-02 00:43:12.194105 | orchestrator | Thursday 02 April 2026 00:43:05 +0000 (0:00:01.250) 0:00:01.568 ******** 2026-04-02 00:43:12.194116 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:43:12.194128 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:43:12.194140 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:43:12.194150 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:43:12.194161 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:43:12.194173 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:43:12.194184 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:43:12.194206 | orchestrator | 2026-04-02 00:43:12.194218 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-02 00:43:12.194229 | orchestrator | 2026-04-02 00:43:12.194241 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-02 00:43:12.194252 | orchestrator | Thursday 02 April 2026 00:43:06 +0000 (0:00:01.189) 0:00:02.757 ******** 2026-04-02 00:43:12.194263 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:43:12.194274 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:43:12.194285 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:43:12.194296 | orchestrator | ok: [testbed-manager] 2026-04-02 00:43:12.194307 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:43:12.194318 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:43:12.194329 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:43:12.194340 | orchestrator | 2026-04-02 00:43:12.194352 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-02 00:43:12.194363 | orchestrator | 2026-04-02 00:43:12.194374 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-02 00:43:12.194385 | orchestrator | Thursday 02 April 2026 00:43:11 +0000 (0:00:04.917) 0:00:07.675 ******** 2026-04-02 00:43:12.194415 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:43:12.194426 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:43:12.194467 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:43:12.194478 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:43:12.194489 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:43:12.194500 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:43:12.194511 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:43:12.194521 | orchestrator | 2026-04-02 00:43:12.194532 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:43:12.194544 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:43:12.194557 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-02 00:43:12.194568 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:43:12.194579 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:43:12.194590 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:43:12.194601 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:43:12.194612 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:43:12.194623 | orchestrator | 2026-04-02 00:43:12.194634 | orchestrator | 2026-04-02 00:43:12.194645 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:43:12.194656 | orchestrator | Thursday 02 April 2026 00:43:11 +0000 (0:00:00.515) 0:00:08.190 ******** 2026-04-02 00:43:12.194667 | orchestrator | =============================================================================== 2026-04-02 00:43:12.194678 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.92s 2026-04-02 00:43:12.194689 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-04-02 00:43:12.194715 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s 2026-04-02 00:43:12.194726 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-04-02 00:43:23.702180 | orchestrator | 2026-04-02 00:43:23 | INFO  | Prepare task for execution of frr. 2026-04-02 00:43:23.781087 | orchestrator | 2026-04-02 00:43:23 | INFO  | Task 64384ed1-a9bd-491f-9f8e-fc9c9f40e154 (frr) was prepared for execution. 
2026-04-02 00:43:23.781185 | orchestrator | 2026-04-02 00:43:23 | INFO  | It takes a moment until task 64384ed1-a9bd-491f-9f8e-fc9c9f40e154 (frr) has been started and output is visible here. 2026-04-02 00:43:48.941568 | orchestrator | 2026-04-02 00:43:48.941660 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-02 00:43:48.941672 | orchestrator | 2026-04-02 00:43:48.941681 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-02 00:43:48.941692 | orchestrator | Thursday 02 April 2026 00:43:26 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-04-02 00:43:48.941707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-02 00:43:48.941722 | orchestrator | 2026-04-02 00:43:48.941735 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-02 00:43:48.941748 | orchestrator | Thursday 02 April 2026 00:43:27 +0000 (0:00:00.193) 0:00:00.468 ******** 2026-04-02 00:43:48.941761 | orchestrator | changed: [testbed-manager] 2026-04-02 00:43:48.941775 | orchestrator | 2026-04-02 00:43:48.941789 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-02 00:43:48.941829 | orchestrator | Thursday 02 April 2026 00:43:28 +0000 (0:00:01.406) 0:00:01.875 ******** 2026-04-02 00:43:48.941841 | orchestrator | changed: [testbed-manager] 2026-04-02 00:43:48.941850 | orchestrator | 2026-04-02 00:43:48.941858 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-02 00:43:48.941866 | orchestrator | Thursday 02 April 2026 00:43:38 +0000 (0:00:09.594) 0:00:11.469 ******** 2026-04-02 00:43:48.941873 | orchestrator | ok: [testbed-manager] 2026-04-02 00:43:48.941882 | orchestrator | 2026-04-02 00:43:48.941891 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-02 00:43:48.941899 | orchestrator | Thursday 02 April 2026 00:43:39 +0000 (0:00:01.004) 0:00:12.473 ******** 2026-04-02 00:43:48.941907 | orchestrator | changed: [testbed-manager] 2026-04-02 00:43:48.941915 | orchestrator | 2026-04-02 00:43:48.941923 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-02 00:43:48.941931 | orchestrator | Thursday 02 April 2026 00:43:40 +0000 (0:00:00.942) 0:00:13.416 ******** 2026-04-02 00:43:48.941939 | orchestrator | ok: [testbed-manager] 2026-04-02 00:43:48.941946 | orchestrator | 2026-04-02 00:43:48.941954 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-02 00:43:48.941962 | orchestrator | Thursday 02 April 2026 00:43:41 +0000 (0:00:01.189) 0:00:14.605 ******** 2026-04-02 00:43:48.941970 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:43:48.941978 | orchestrator | 2026-04-02 00:43:48.941986 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-02 00:43:48.941994 | orchestrator | Thursday 02 April 2026 00:43:41 +0000 (0:00:00.167) 0:00:14.773 ******** 2026-04-02 00:43:48.942001 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:43:48.942009 | orchestrator | 2026-04-02 00:43:48.942068 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-02 00:43:48.942077 | orchestrator | Thursday 02 April 2026 00:43:41 +0000 (0:00:00.285) 0:00:15.059 ******** 2026-04-02 00:43:48.942088 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:43:48.942103 | orchestrator | 2026-04-02 00:43:48.942112 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-02 00:43:48.942123 | orchestrator | Thursday 02 April 2026 00:43:41 +0000 (0:00:00.161) 0:00:15.221 ******** 2026-04-02 
00:43:48.942132 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:43:48.942141 | orchestrator | 2026-04-02 00:43:48.942151 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-02 00:43:48.942161 | orchestrator | Thursday 02 April 2026 00:43:42 +0000 (0:00:00.137) 0:00:15.358 ******** 2026-04-02 00:43:48.942169 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:43:48.942178 | orchestrator | 2026-04-02 00:43:48.942188 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-02 00:43:48.942197 | orchestrator | Thursday 02 April 2026 00:43:42 +0000 (0:00:00.150) 0:00:15.508 ******** 2026-04-02 00:43:48.942206 | orchestrator | changed: [testbed-manager] 2026-04-02 00:43:48.942215 | orchestrator | 2026-04-02 00:43:48.942224 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-02 00:43:48.942232 | orchestrator | Thursday 02 April 2026 00:43:43 +0000 (0:00:01.007) 0:00:16.516 ******** 2026-04-02 00:43:48.942239 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-02 00:43:48.942247 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-02 00:43:48.942257 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-02 00:43:48.942265 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-02 00:43:48.942272 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-02 00:43:48.942281 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-02 00:43:48.942296 | orchestrator | 2026-04-02 00:43:48.942304 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-02 00:43:48.942324 | orchestrator | Thursday 02 April 2026 00:43:46 +0000 (0:00:03.106) 0:00:19.622 ******** 2026-04-02 00:43:48.942332 | orchestrator | ok: [testbed-manager] 2026-04-02 00:43:48.942340 | orchestrator | 2026-04-02 00:43:48.942348 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-02 00:43:48.942356 | orchestrator | Thursday 02 April 2026 00:43:47 +0000 (0:00:01.073) 0:00:20.696 ******** 2026-04-02 00:43:48.942384 | orchestrator | changed: [testbed-manager] 2026-04-02 00:43:48.942392 | orchestrator | 2026-04-02 00:43:48.942400 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:43:48.942409 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-02 00:43:48.942417 | orchestrator | 2026-04-02 00:43:48.942425 | orchestrator | 2026-04-02 00:43:48.942449 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:43:48.942457 | orchestrator | Thursday 02 April 2026 00:43:48 +0000 (0:00:01.324) 0:00:22.020 ******** 2026-04-02 00:43:48.942465 | orchestrator | =============================================================================== 2026-04-02 00:43:48.942473 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.59s 2026-04-02 00:43:48.942481 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.11s 2026-04-02 00:43:48.942489 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.41s 2026-04-02 00:43:48.942497 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.32s 2026-04-02 00:43:48.942505 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 
2026-04-02 00:43:48.942513 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.07s 2026-04-02 00:43:48.942521 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s 2026-04-02 00:43:48.942528 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s 2026-04-02 00:43:48.942536 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s 2026-04-02 00:43:48.942544 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.29s 2026-04-02 00:43:48.942552 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.19s 2026-04-02 00:43:48.942560 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.17s 2026-04-02 00:43:48.942568 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-04-02 00:43:48.942576 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-04-02 00:43:48.942584 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-04-02 00:43:49.062238 | orchestrator | 2026-04-02 00:43:49.064773 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Apr 2 00:43:49 UTC 2026 2026-04-02 00:43:49.064823 | orchestrator | 2026-04-02 00:43:50.090849 | orchestrator | 2026-04-02 00:43:50 | INFO  | Collection nutshell is prepared for execution 2026-04-02 00:43:50.189337 | orchestrator | 2026-04-02 00:43:50 | INFO  | A [0] - dotfiles 2026-04-02 00:44:00.300917 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [0] - homer 2026-04-02 00:44:00.301017 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [0] - netdata 2026-04-02 00:44:00.301031 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [0] - openstackclient 2026-04-02 00:44:00.301042 | orchestrator | 2026-04-02 00:44:00 
| INFO  | A [0] - phpmyadmin 2026-04-02 00:44:00.301053 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [0] - common 2026-04-02 00:44:00.305077 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- loadbalancer 2026-04-02 00:44:00.305243 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [2] --- opensearch 2026-04-02 00:44:00.305302 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [2] --- mariadb-ng 2026-04-02 00:44:00.305774 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [3] ---- horizon 2026-04-02 00:44:00.306100 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [3] ---- keystone 2026-04-02 00:44:00.306462 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- neutron 2026-04-02 00:44:00.306830 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [5] ------ wait-for-nova 2026-04-02 00:44:00.307281 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [6] ------- octavia 2026-04-02 00:44:00.308863 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- barbican 2026-04-02 00:44:00.308920 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- designate 2026-04-02 00:44:00.309200 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- ironic 2026-04-02 00:44:00.309538 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- placement 2026-04-02 00:44:00.309573 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- magnum 2026-04-02 00:44:00.311507 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- openvswitch 2026-04-02 00:44:00.311570 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [2] --- ovn 2026-04-02 00:44:00.312097 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- memcached 2026-04-02 00:44:00.312131 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- redis 2026-04-02 00:44:00.312406 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- rabbitmq-ng 2026-04-02 00:44:00.312925 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [0] - kubernetes 2026-04-02 00:44:00.315529 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- 
kubeconfig 2026-04-02 00:44:00.315575 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- copy-kubeconfig 2026-04-02 00:44:00.316106 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [0] - ceph 2026-04-02 00:44:00.318532 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [1] -- ceph-pools 2026-04-02 00:44:00.318622 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [2] --- copy-ceph-keys 2026-04-02 00:44:00.318645 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [3] ---- cephclient 2026-04-02 00:44:00.318676 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-02 00:44:00.319111 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- wait-for-keystone 2026-04-02 00:44:00.319171 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-02 00:44:00.319443 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [5] ------ glance 2026-04-02 00:44:00.319774 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [5] ------ cinder 2026-04-02 00:44:00.319803 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [5] ------ nova 2026-04-02 00:44:00.320477 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [4] ----- prometheus 2026-04-02 00:44:00.320762 | orchestrator | 2026-04-02 00:44:00 | INFO  | A [5] ------ grafana 2026-04-02 00:44:00.488178 | orchestrator | 2026-04-02 00:44:00 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-02 00:44:00.488269 | orchestrator | 2026-04-02 00:44:00 | INFO  | Tasks are running in the background 2026-04-02 00:44:01.980227 | orchestrator | 2026-04-02 00:44:01 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-02 00:44:04.171405 | orchestrator | 2026-04-02 00:44:04 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:04.171848 | orchestrator | 2026-04-02 00:44:04 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:04.172669 | orchestrator | 2026-04-02 00:44:04 | INFO 
 | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:04.175481 | orchestrator | 2026-04-02 00:44:04 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:04.176154 | orchestrator | 2026-04-02 00:44:04 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:04.178534 | orchestrator | 2026-04-02 00:44:04 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:04.180461 | orchestrator | 2026-04-02 00:44:04 | INFO  | Task 0afab47d-018a-40a5-b1d0-91d6b4855289 is in state STARTED 2026-04-02 00:44:04.180493 | orchestrator | 2026-04-02 00:44:04 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:07.221231 | orchestrator | 2026-04-02 00:44:07 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:07.223854 | orchestrator | 2026-04-02 00:44:07 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:07.231725 | orchestrator | 2026-04-02 00:44:07 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:07.233940 | orchestrator | 2026-04-02 00:44:07 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:07.236887 | orchestrator | 2026-04-02 00:44:07 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:07.238860 | orchestrator | 2026-04-02 00:44:07 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:07.241505 | orchestrator | 2026-04-02 00:44:07 | INFO  | Task 0afab47d-018a-40a5-b1d0-91d6b4855289 is in state STARTED 2026-04-02 00:44:07.241580 | orchestrator | 2026-04-02 00:44:07 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:10.272129 | orchestrator | 2026-04-02 00:44:10 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:10.272493 | orchestrator | 2026-04-02 00:44:10 | INFO  | Task 
9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:10.273201 | orchestrator | 2026-04-02 00:44:10 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:10.273992 | orchestrator | 2026-04-02 00:44:10 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:10.275184 | orchestrator | 2026-04-02 00:44:10 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:10.275974 | orchestrator | 2026-04-02 00:44:10 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:10.277644 | orchestrator | 2026-04-02 00:44:10 | INFO  | Task 0afab47d-018a-40a5-b1d0-91d6b4855289 is in state STARTED 2026-04-02 00:44:10.277665 | orchestrator | 2026-04-02 00:44:10 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:13.376761 | orchestrator | 2026-04-02 00:44:13 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:13.376853 | orchestrator | 2026-04-02 00:44:13 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:13.377900 | orchestrator | 2026-04-02 00:44:13 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:13.379861 | orchestrator | 2026-04-02 00:44:13 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:13.381433 | orchestrator | 2026-04-02 00:44:13 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:13.383078 | orchestrator | 2026-04-02 00:44:13 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:13.385113 | orchestrator | 2026-04-02 00:44:13 | INFO  | Task 0afab47d-018a-40a5-b1d0-91d6b4855289 is in state STARTED 2026-04-02 00:44:13.385134 | orchestrator | 2026-04-02 00:44:13 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:16.588588 | orchestrator | 2026-04-02 00:44:16 | INFO  | Task 
e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:16.588684 | orchestrator | 2026-04-02 00:44:16 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:16.591252 | orchestrator | 2026-04-02 00:44:16 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:16.592994 | orchestrator | 2026-04-02 00:44:16 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:16.594623 | orchestrator | 2026-04-02 00:44:16 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:16.596039 | orchestrator | 2026-04-02 00:44:16 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:16.597880 | orchestrator | 2026-04-02 00:44:16 | INFO  | Task 0afab47d-018a-40a5-b1d0-91d6b4855289 is in state STARTED 2026-04-02 00:44:16.597919 | orchestrator | 2026-04-02 00:44:16 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:19.661152 | orchestrator | 2026-04-02 00:44:19 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:19.661259 | orchestrator | 2026-04-02 00:44:19 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:19.661275 | orchestrator | 2026-04-02 00:44:19 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:19.661287 | orchestrator | 2026-04-02 00:44:19 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:19.661298 | orchestrator | 2026-04-02 00:44:19 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:19.661309 | orchestrator | 2026-04-02 00:44:19 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:19.661320 | orchestrator | 2026-04-02 00:44:19 | INFO  | Task 0afab47d-018a-40a5-b1d0-91d6b4855289 is in state STARTED 2026-04-02 00:44:19.661331 | orchestrator | 2026-04-02 
00:44:19 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:22.773803 | orchestrator | 2026-04-02 00:44:22 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:22.773909 | orchestrator | 2026-04-02 00:44:22 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:22.774906 | orchestrator | 2026-04-02 00:44:22 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:22.778979 | orchestrator | 2026-04-02 00:44:22 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:22.780393 | orchestrator | 2026-04-02 00:44:22 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:22.783205 | orchestrator | 2026-04-02 00:44:22 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:22.784241 | orchestrator | 2026-04-02 00:44:22 | INFO  | Task 0afab47d-018a-40a5-b1d0-91d6b4855289 is in state SUCCESS 2026-04-02 00:44:22.784763 | orchestrator | 2026-04-02 00:44:22.784810 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-02 00:44:22.784828 | orchestrator | 2026-04-02 00:44:22.784843 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-04-02 00:44:22.784862 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:00.671) 0:00:00.671 ******** 2026-04-02 00:44:22.784878 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:44:22.784944 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:44:22.784957 | orchestrator | changed: [testbed-manager] 2026-04-02 00:44:22.784966 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:44:22.784976 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:44:22.784986 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:44:22.784995 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:44:22.785005 | orchestrator | 2026-04-02 00:44:22.785015 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-04-02 00:44:22.785025 | orchestrator | Thursday 02 April 2026 00:44:13 +0000 (0:00:03.834) 0:00:04.506 ******** 2026-04-02 00:44:22.785035 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-02 00:44:22.785046 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-02 00:44:22.785055 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-02 00:44:22.785065 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-02 00:44:22.785075 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-02 00:44:22.785084 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-02 00:44:22.785094 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-02 00:44:22.785104 | orchestrator | 2026-04-02 00:44:22.785114 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-04-02 00:44:22.785124 | orchestrator | Thursday 02 April 2026 00:44:15 +0000 (0:00:01.408) 0:00:05.915 ******** 2026-04-02 00:44:22.785140 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-02 00:44:14.850368', 'end': '2026-04-02 00:44:14.935180', 'delta': '0:00:00.084812', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-02 00:44:22.785160 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-02 00:44:14.726689', 'end': '2026-04-02 00:44:14.733024', 'delta': '0:00:00.006335', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-02 00:44:22.785281 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-02 00:44:14.679627', 'end': '2026-04-02 00:44:14.691046', 'delta': '0:00:00.011419', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-02 00:44:22.785326 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-02 00:44:14.982272', 'end': '2026-04-02 00:44:14.992526', 'delta': '0:00:00.010254', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-02 00:44:22.785546 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-02 00:44:14.809453', 'end': '2026-04-02 00:44:14.817043', 'delta': '0:00:00.007590', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-02 00:44:22.785568 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-02 00:44:14.762085', 'end': '2026-04-02 00:44:14.771467', 'delta': '0:00:00.009382', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-02 00:44:22.785579 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-02 00:44:15.123014', 'end': '2026-04-02 00:44:15.130984', 'delta': '0:00:00.007970', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-02 00:44:22.785589 | orchestrator | 2026-04-02 00:44:22.785600 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-04-02 00:44:22.785610 | orchestrator | Thursday 02 April 2026 00:44:17 +0000 (0:00:02.002) 0:00:07.917 ******** 2026-04-02 00:44:22.785620 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-02 00:44:22.785631 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-02 00:44:22.785640 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-02 00:44:22.785650 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-02 00:44:22.785659 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-02 00:44:22.785669 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-02 00:44:22.785686 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-02 00:44:22.785696 | orchestrator | 2026-04-02 00:44:22.785706 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-04-02 00:44:22.785716 | orchestrator | Thursday 02 April 2026 00:44:19 +0000 (0:00:02.097) 0:00:10.015 ******** 2026-04-02 00:44:22.785726 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-02 00:44:22.785735 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-02 00:44:22.785745 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-02 00:44:22.785756 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-02 00:44:22.785765 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-02 00:44:22.785775 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-02 00:44:22.785785 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-02 00:44:22.785794 | orchestrator | 2026-04-02 00:44:22.785804 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:44:22.785829 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:44:22.785850 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:44:22.785861 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:44:22.785871 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:44:22.785880 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:44:22.785890 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:44:22.785901 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:44:22.785912 | orchestrator | 2026-04-02 00:44:22.785923 | orchestrator | 2026-04-02 00:44:22.785934 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-04-02 00:44:22.785946 | orchestrator | Thursday 02 April 2026 00:44:21 +0000 (0:00:02.598) 0:00:12.614 ******** 2026-04-02 00:44:22.785957 | orchestrator | =============================================================================== 2026-04-02 00:44:22.785968 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.83s 2026-04-02 00:44:22.785980 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.60s 2026-04-02 00:44:22.785992 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.10s 2026-04-02 00:44:22.786003 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.00s 2026-04-02 00:44:22.786014 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.41s 2026-04-02 00:44:22.786088 | orchestrator | 2026-04-02 00:44:22 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:25.838070 | orchestrator | 2026-04-02 00:44:25 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:25.842885 | orchestrator | 2026-04-02 00:44:25 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:25.842953 | orchestrator | 2026-04-02 00:44:25 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:25.842958 | orchestrator | 2026-04-02 00:44:25 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:25.847244 | orchestrator | 2026-04-02 00:44:25 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:25.849703 | orchestrator | 2026-04-02 00:44:25 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:25.853032 | orchestrator | 2026-04-02 00:44:25 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is 
in state STARTED 2026-04-02 00:44:25.854778 | orchestrator | 2026-04-02 00:44:25 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:28.954603 | orchestrator | 2026-04-02 00:44:28 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:28.954676 | orchestrator | 2026-04-02 00:44:28 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:28.954682 | orchestrator | 2026-04-02 00:44:28 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:28.954687 | orchestrator | 2026-04-02 00:44:28 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:28.954691 | orchestrator | 2026-04-02 00:44:28 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:28.954695 | orchestrator | 2026-04-02 00:44:28 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:28.954699 | orchestrator | 2026-04-02 00:44:28 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:28.954704 | orchestrator | 2026-04-02 00:44:28 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:31.943987 | orchestrator | 2026-04-02 00:44:31 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:31.944058 | orchestrator | 2026-04-02 00:44:31 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:31.944064 | orchestrator | 2026-04-02 00:44:31 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:31.944069 | orchestrator | 2026-04-02 00:44:31 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:31.944073 | orchestrator | 2026-04-02 00:44:31 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:31.944091 | orchestrator | 2026-04-02 00:44:31 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in 
state STARTED 2026-04-02 00:44:31.944095 | orchestrator | 2026-04-02 00:44:31 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:31.944100 | orchestrator | 2026-04-02 00:44:31 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:34.992131 | orchestrator | 2026-04-02 00:44:34 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:34.992380 | orchestrator | 2026-04-02 00:44:34 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:34.992417 | orchestrator | 2026-04-02 00:44:34 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:34.994248 | orchestrator | 2026-04-02 00:44:34 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:34.994369 | orchestrator | 2026-04-02 00:44:34 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:34.995473 | orchestrator | 2026-04-02 00:44:34 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:34.995540 | orchestrator | 2026-04-02 00:44:34 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:34.995560 | orchestrator | 2026-04-02 00:44:34 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:38.039272 | orchestrator | 2026-04-02 00:44:38 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:38.039500 | orchestrator | 2026-04-02 00:44:38 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:38.046090 | orchestrator | 2026-04-02 00:44:38 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:38.046153 | orchestrator | 2026-04-02 00:44:38 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:38.046164 | orchestrator | 2026-04-02 00:44:38 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state 
STARTED 2026-04-02 00:44:38.046175 | orchestrator | 2026-04-02 00:44:38 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:38.046184 | orchestrator | 2026-04-02 00:44:38 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:38.046195 | orchestrator | 2026-04-02 00:44:38 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:41.087929 | orchestrator | 2026-04-02 00:44:41 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:41.088446 | orchestrator | 2026-04-02 00:44:41 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:41.090008 | orchestrator | 2026-04-02 00:44:41 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:41.092116 | orchestrator | 2026-04-02 00:44:41 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:41.093273 | orchestrator | 2026-04-02 00:44:41 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:41.094880 | orchestrator | 2026-04-02 00:44:41 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:41.097270 | orchestrator | 2026-04-02 00:44:41 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:41.097393 | orchestrator | 2026-04-02 00:44:41 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:44.137161 | orchestrator | 2026-04-02 00:44:44 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state STARTED 2026-04-02 00:44:44.138713 | orchestrator | 2026-04-02 00:44:44 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:44.138771 | orchestrator | 2026-04-02 00:44:44 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:44.139387 | orchestrator | 2026-04-02 00:44:44 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 
2026-04-02 00:44:44.140217 | orchestrator | 2026-04-02 00:44:44 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:44.140882 | orchestrator | 2026-04-02 00:44:44 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:44.141365 | orchestrator | 2026-04-02 00:44:44 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:44.141436 | orchestrator | 2026-04-02 00:44:44 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:47.281461 | orchestrator | 2026-04-02 00:44:47 | INFO  | Task e5e96384-59a7-4ea5-9731-55f86beb5a45 is in state SUCCESS 2026-04-02 00:44:47.281525 | orchestrator | 2026-04-02 00:44:47 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:47.281706 | orchestrator | 2026-04-02 00:44:47 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:47.283399 | orchestrator | 2026-04-02 00:44:47 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:47.285097 | orchestrator | 2026-04-02 00:44:47 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:47.286232 | orchestrator | 2026-04-02 00:44:47 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:47.300407 | orchestrator | 2026-04-02 00:44:47 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:47.300496 | orchestrator | 2026-04-02 00:44:47 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:50.379106 | orchestrator | 2026-04-02 00:44:50 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:50.379178 | orchestrator | 2026-04-02 00:44:50 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:50.379189 | orchestrator | 2026-04-02 00:44:50 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 
2026-04-02 00:44:50.379197 | orchestrator | 2026-04-02 00:44:50 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:50.379206 | orchestrator | 2026-04-02 00:44:50 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:50.379214 | orchestrator | 2026-04-02 00:44:50 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:50.379222 | orchestrator | 2026-04-02 00:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:53.460580 | orchestrator | 2026-04-02 00:44:53 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:53.461469 | orchestrator | 2026-04-02 00:44:53 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:53.461987 | orchestrator | 2026-04-02 00:44:53 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:53.463227 | orchestrator | 2026-04-02 00:44:53 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:53.464352 | orchestrator | 2026-04-02 00:44:53 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:53.464389 | orchestrator | 2026-04-02 00:44:53 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:53.465355 | orchestrator | 2026-04-02 00:44:53 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:56.521998 | orchestrator | 2026-04-02 00:44:56 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:56.524391 | orchestrator | 2026-04-02 00:44:56 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:56.531067 | orchestrator | 2026-04-02 00:44:56 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state STARTED 2026-04-02 00:44:56.532047 | orchestrator | 2026-04-02 00:44:56 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 
2026-04-02 00:44:56.532643 | orchestrator | 2026-04-02 00:44:56 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:56.532978 | orchestrator | 2026-04-02 00:44:56 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:56.532994 | orchestrator | 2026-04-02 00:44:56 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:44:59.566297 | orchestrator | 2026-04-02 00:44:59 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:44:59.566436 | orchestrator | 2026-04-02 00:44:59 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:44:59.566448 | orchestrator | 2026-04-02 00:44:59 | INFO  | Task 88dcd986-4c50-4893-bc70-6fd6a8862469 is in state SUCCESS 2026-04-02 00:44:59.566456 | orchestrator | 2026-04-02 00:44:59 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:44:59.566505 | orchestrator | 2026-04-02 00:44:59 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:44:59.567843 | orchestrator | 2026-04-02 00:44:59 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:44:59.567880 | orchestrator | 2026-04-02 00:44:59 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:45:02.728873 | orchestrator | 2026-04-02 00:45:02 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED 2026-04-02 00:45:02.728950 | orchestrator | 2026-04-02 00:45:02 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:45:02.728956 | orchestrator | 2026-04-02 00:45:02 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:45:02.728960 | orchestrator | 2026-04-02 00:45:02 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED 2026-04-02 00:45:02.728965 | orchestrator | 2026-04-02 00:45:02 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 
2026-04-02 00:45:02.728970 | orchestrator | 2026-04-02 00:45:02 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:05.644557 | orchestrator | 2026-04-02 00:45:05 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:05.646465 | orchestrator | 2026-04-02 00:45:05 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:05.648279 | orchestrator | 2026-04-02 00:45:05 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:05.649822 | orchestrator | 2026-04-02 00:45:05 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:05.650958 | orchestrator | 2026-04-02 00:45:05 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:05.651010 | orchestrator | 2026-04-02 00:45:05 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:08.691400 | orchestrator | 2026-04-02 00:45:08 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:08.696176 | orchestrator | 2026-04-02 00:45:08 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:08.700468 | orchestrator | 2026-04-02 00:45:08 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:08.707502 | orchestrator | 2026-04-02 00:45:08 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:08.709192 | orchestrator | 2026-04-02 00:45:08 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:08.709739 | orchestrator | 2026-04-02 00:45:08 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:11.752230 | orchestrator | 2026-04-02 00:45:11 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:11.752706 | orchestrator | 2026-04-02 00:45:11 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:11.753825 | orchestrator | 2026-04-02 00:45:11 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:11.754585 | orchestrator | 2026-04-02 00:45:11 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:11.755212 | orchestrator | 2026-04-02 00:45:11 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:11.755238 | orchestrator | 2026-04-02 00:45:11 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:14.784888 | orchestrator | 2026-04-02 00:45:14 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:14.785365 | orchestrator | 2026-04-02 00:45:14 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:14.786435 | orchestrator | 2026-04-02 00:45:14 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:14.787653 | orchestrator | 2026-04-02 00:45:14 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:14.788636 | orchestrator | 2026-04-02 00:45:14 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:14.788797 | orchestrator | 2026-04-02 00:45:14 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:17.830690 | orchestrator | 2026-04-02 00:45:17 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:17.831675 | orchestrator | 2026-04-02 00:45:17 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:17.832510 | orchestrator | 2026-04-02 00:45:17 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:17.834159 | orchestrator | 2026-04-02 00:45:17 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:17.834783 | orchestrator | 2026-04-02 00:45:17 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:17.834803 | orchestrator | 2026-04-02 00:45:17 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:20.950627 | orchestrator | 2026-04-02 00:45:20 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:20.952160 | orchestrator | 2026-04-02 00:45:20 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:20.952546 | orchestrator | 2026-04-02 00:45:20 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:20.953641 | orchestrator | 2026-04-02 00:45:20 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:20.955359 | orchestrator | 2026-04-02 00:45:20 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:20.955395 | orchestrator | 2026-04-02 00:45:20 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:23.998817 | orchestrator | 2026-04-02 00:45:23 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:23.998883 | orchestrator | 2026-04-02 00:45:23 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:24.000101 | orchestrator | 2026-04-02 00:45:24 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:24.003211 | orchestrator | 2026-04-02 00:45:24 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:24.006250 | orchestrator | 2026-04-02 00:45:24 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:24.006406 | orchestrator | 2026-04-02 00:45:24 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:27.067510 | orchestrator | 2026-04-02 00:45:27 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:27.067571 | orchestrator | 2026-04-02 00:45:27 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:27.067581 | orchestrator | 2026-04-02 00:45:27 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:27.067589 | orchestrator | 2026-04-02 00:45:27 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:27.067596 | orchestrator | 2026-04-02 00:45:27 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:27.067618 | orchestrator | 2026-04-02 00:45:27 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:30.091692 | orchestrator | 2026-04-02 00:45:30 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:30.091739 | orchestrator | 2026-04-02 00:45:30 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:30.091744 | orchestrator | 2026-04-02 00:45:30 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:30.091748 | orchestrator | 2026-04-02 00:45:30 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:30.091752 | orchestrator | 2026-04-02 00:45:30 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:30.091756 | orchestrator | 2026-04-02 00:45:30 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:33.121936 | orchestrator | 2026-04-02 00:45:33 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:33.122119 | orchestrator | 2026-04-02 00:45:33 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:33.124123 | orchestrator | 2026-04-02 00:45:33 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:33.125944 | orchestrator | 2026-04-02 00:45:33 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:33.126418 | orchestrator | 2026-04-02 00:45:33 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:33.126448 | orchestrator | 2026-04-02 00:45:33 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:36.178164 | orchestrator | 2026-04-02 00:45:36 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state STARTED
2026-04-02 00:45:36.183392 | orchestrator | 2026-04-02 00:45:36 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:36.183787 | orchestrator | 2026-04-02 00:45:36 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:36.183804 | orchestrator | 2026-04-02 00:45:36 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:36.184052 | orchestrator | 2026-04-02 00:45:36 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:36.185044 | orchestrator | 2026-04-02 00:45:36 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:39.232973 | orchestrator |
2026-04-02 00:45:39.233029 | orchestrator |
2026-04-02 00:45:39.233037 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-02 00:45:39.233044 | orchestrator |
2026-04-02 00:45:39.233050 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-02 00:45:39.233056 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:00.428) 0:00:00.428 ********
2026-04-02 00:45:39.233062 | orchestrator | ok: [testbed-manager] => {
2026-04-02 00:45:39.233069 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-02 00:45:39.233076 | orchestrator | }
2026-04-02 00:45:39.233082 | orchestrator |
2026-04-02 00:45:39.233087 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-02 00:45:39.233093 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:00.356) 0:00:00.785 ********
2026-04-02 00:45:39.233098 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:39.233105 | orchestrator |
2026-04-02 00:45:39.233110 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-02 00:45:39.233115 | orchestrator | Thursday 02 April 2026 00:44:12 +0000 (0:00:02.470) 0:00:03.256 ********
2026-04-02 00:45:39.233120 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-02 00:45:39.233140 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-02 00:45:39.233146 | orchestrator |
2026-04-02 00:45:39.233151 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-02 00:45:39.233156 | orchestrator | Thursday 02 April 2026 00:44:14 +0000 (0:00:01.676) 0:00:04.932 ********
2026-04-02 00:45:39.233161 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233167 | orchestrator |
2026-04-02 00:45:39.233172 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-02 00:45:39.233178 | orchestrator | Thursday 02 April 2026 00:44:16 +0000 (0:00:01.999) 0:00:06.932 ********
2026-04-02 00:45:39.233191 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233197 | orchestrator |
2026-04-02 00:45:39.233202 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-02 00:45:39.233208 | orchestrator | Thursday 02 April 2026 00:44:18 +0000 (0:00:01.842) 0:00:08.774 ********
2026-04-02 00:45:39.233213 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
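The `FAILED - RETRYING: ... (10 retries left).` lines here come from Ansible's `retries`/`until` mechanism: the task is re-run, with a delay between attempts, until its condition holds or the retry budget is exhausted. A rough Python sketch of that loop (attempt counting and delay handling simplified; the role's actual `retries`/`delay` values are not shown in this log beyond the initial count of 10):

```python
import time


def run_with_retries(task, until, retries=10, delay=5, sleep=time.sleep):
    """Re-run `task` until `until(result)` is true, like Ansible retries/until."""
    for attempt in range(retries + 1):  # one initial try plus `retries` retries
        result = task()
        if until(result):
            return result
        if attempt < retries:
            print(f"FAILED - RETRYING: ({retries - attempt} retries left).")
            sleep(delay)
    raise RuntimeError("task failed after all retries")
```

In the log this shows up as a single retry message followed by `ok:` once the container manager reports the service as up, i.e. the `until` condition succeeded on a later attempt.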
2026-04-02 00:45:39.233218 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:39.233224 | orchestrator |
2026-04-02 00:45:39.233229 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-02 00:45:39.233234 | orchestrator | Thursday 02 April 2026 00:44:43 +0000 (0:00:25.571) 0:00:34.346 ********
2026-04-02 00:45:39.233239 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233244 | orchestrator |
2026-04-02 00:45:39.233249 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:45:39.233255 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:45:39.233261 | orchestrator |
2026-04-02 00:45:39.233266 | orchestrator |
2026-04-02 00:45:39.233271 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:45:39.233277 | orchestrator | Thursday 02 April 2026 00:44:46 +0000 (0:00:03.067) 0:00:37.414 ********
2026-04-02 00:45:39.233282 | orchestrator | ===============================================================================
2026-04-02 00:45:39.233287 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.57s
2026-04-02 00:45:39.233318 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.07s
2026-04-02 00:45:39.233324 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.47s
2026-04-02 00:45:39.233330 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.00s
2026-04-02 00:45:39.233336 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.84s
2026-04-02 00:45:39.233341 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.68s
2026-04-02 00:45:39.233346 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.36s
2026-04-02 00:45:39.233351 | orchestrator |
2026-04-02 00:45:39.233356 | orchestrator |
2026-04-02 00:45:39.233362 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-02 00:45:39.233366 | orchestrator |
2026-04-02 00:45:39.233372 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-02 00:45:39.233377 | orchestrator | Thursday 02 April 2026 00:44:09 +0000 (0:00:00.537) 0:00:00.537 ********
2026-04-02 00:45:39.233383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-02 00:45:39.233389 | orchestrator |
2026-04-02 00:45:39.233395 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-02 00:45:39.233400 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:00.553) 0:00:01.091 ********
2026-04-02 00:45:39.233405 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-02 00:45:39.233410 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-02 00:45:39.233416 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-02 00:45:39.233428 | orchestrator |
2026-04-02 00:45:39.233433 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-02 00:45:39.233439 | orchestrator | Thursday 02 April 2026 00:44:13 +0000 (0:00:02.716) 0:00:03.808 ********
2026-04-02 00:45:39.233444 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233449 | orchestrator |
2026-04-02 00:45:39.233454 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-02 00:45:39.233460 | orchestrator | Thursday 02 April 2026 00:44:16 +0000 (0:00:03.004) 0:00:06.812 ********
2026-04-02 00:45:39.233475 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-02 00:45:39.233481 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:39.233486 | orchestrator |
2026-04-02 00:45:39.233491 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-02 00:45:39.233497 | orchestrator | Thursday 02 April 2026 00:44:50 +0000 (0:00:34.494) 0:00:41.307 ********
2026-04-02 00:45:39.233502 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233513 | orchestrator |
2026-04-02 00:45:39.233518 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-02 00:45:39.233523 | orchestrator | Thursday 02 April 2026 00:44:51 +0000 (0:00:01.123) 0:00:42.430 ********
2026-04-02 00:45:39.233529 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:39.233534 | orchestrator |
2026-04-02 00:45:39.233539 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-02 00:45:39.233545 | orchestrator | Thursday 02 April 2026 00:44:53 +0000 (0:00:01.487) 0:00:43.918 ********
2026-04-02 00:45:39.233550 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233555 | orchestrator |
2026-04-02 00:45:39.233561 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-02 00:45:39.233567 | orchestrator | Thursday 02 April 2026 00:44:56 +0000 (0:00:02.778) 0:00:46.696 ********
2026-04-02 00:45:39.233572 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233577 | orchestrator |
2026-04-02 00:45:39.233583 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-02 00:45:39.233588 | orchestrator | Thursday 02 April 2026 00:44:57 +0000 (0:00:01.139) 0:00:47.836 ********
2026-04-02 00:45:39.233593 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233599 | orchestrator |
2026-04-02 00:45:39.233604 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-02 00:45:39.233610 | orchestrator | Thursday 02 April 2026 00:44:57 +0000 (0:00:00.741) 0:00:48.578 ********
2026-04-02 00:45:39.233616 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:39.233623 | orchestrator |
2026-04-02 00:45:39.233637 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:45:39.233643 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:45:39.233648 | orchestrator |
2026-04-02 00:45:39.233654 | orchestrator |
2026-04-02 00:45:39.233660 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:45:39.233665 | orchestrator | Thursday 02 April 2026 00:44:58 +0000 (0:00:00.395) 0:00:48.973 ********
2026-04-02 00:45:39.233671 | orchestrator | ===============================================================================
2026-04-02 00:45:39.233677 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.49s
2026-04-02 00:45:39.233683 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.00s
2026-04-02 00:45:39.233689 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.78s
2026-04-02 00:45:39.233695 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.72s
2026-04-02 00:45:39.233700 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.49s
2026-04-02 00:45:39.233707 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.14s
2026-04-02 00:45:39.233717 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.12s
2026-04-02 00:45:39.233723 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.74s
2026-04-02 00:45:39.233729 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.55s
2026-04-02 00:45:39.233735 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s
2026-04-02 00:45:39.233742 | orchestrator |
2026-04-02 00:45:39.233748 | orchestrator |
2026-04-02 00:45:39.233755 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-02 00:45:39.233761 | orchestrator |
2026-04-02 00:45:39.233768 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-02 00:45:39.233776 | orchestrator | Thursday 02 April 2026 00:44:27 +0000 (0:00:00.293) 0:00:00.293 ********
2026-04-02 00:45:39.233782 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:39.233789 | orchestrator |
2026-04-02 00:45:39.233794 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-04-02 00:45:39.233800 | orchestrator | Thursday 02 April 2026 00:44:30 +0000 (0:00:02.225) 0:00:02.519 ********
2026-04-02 00:45:39.233806 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-02 00:45:39.233812 | orchestrator |
2026-04-02 00:45:39.233819 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-02 00:45:39.233824 | orchestrator | Thursday 02 April 2026 00:44:30 +0000 (0:00:00.703) 0:00:03.223 ********
2026-04-02 00:45:39.233831 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233837 | orchestrator |
2026-04-02 00:45:39.233843 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-02 00:45:39.233850 | orchestrator | Thursday 02 April 2026 00:44:32 +0000 (0:00:01.288) 0:00:04.511 ********
2026-04-02 00:45:39.233856 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-04-02 00:45:39.233863 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:39.233869 | orchestrator |
2026-04-02 00:45:39.233875 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-02 00:45:39.233881 | orchestrator | Thursday 02 April 2026 00:45:33 +0000 (0:01:00.973) 0:01:05.484 ********
2026-04-02 00:45:39.233887 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:39.233893 | orchestrator |
2026-04-02 00:45:39.233899 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:45:39.233905 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 00:45:39.233912 | orchestrator |
2026-04-02 00:45:39.233919 | orchestrator |
2026-04-02 00:45:39.233925 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:45:39.233938 | orchestrator | Thursday 02 April 2026 00:45:36 +0000 (0:00:03.243) 0:01:08.728 ********
2026-04-02 00:45:39.233945 | orchestrator | ===============================================================================
2026-04-02 00:45:39.233951 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 60.97s
2026-04-02 00:45:39.233957 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.24s
2026-04-02 00:45:39.233962 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.23s
2026-04-02 00:45:39.233969 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.29s
2026-04-02 00:45:39.233974 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.70s
2026-04-02 00:45:39.233982 | orchestrator | 2026-04-02 00:45:39 | INFO  | Task b630af42-1996-49ce-b498-b2dda15cede5 is in state SUCCESS
2026-04-02 00:45:39.234220 | orchestrator | 2026-04-02 00:45:39 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:39.237231 | orchestrator | 2026-04-02 00:45:39 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:39.238453 | orchestrator | 2026-04-02 00:45:39 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:39.239915 | orchestrator | 2026-04-02 00:45:39 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:39.239952 | orchestrator | 2026-04-02 00:45:39 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:42.291418 | orchestrator | 2026-04-02 00:45:42 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:42.294249 | orchestrator | 2026-04-02 00:45:42 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:42.297374 | orchestrator | 2026-04-02 00:45:42 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state STARTED
2026-04-02 00:45:42.301092 | orchestrator | 2026-04-02 00:45:42 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED
2026-04-02 00:45:42.301144 | orchestrator | 2026-04-02 00:45:42 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:45:45.348646 | orchestrator | 2026-04-02 00:45:45 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:45:45.350552 | orchestrator | 2026-04-02 00:45:45 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:45:45.351767 | orchestrator | 2026-04-02 00:45:45 | INFO  | Task 72f0e25c-8ca4-4dab-bfc3-7f1deda9a5ce is in state SUCCESS
2026-04-02 00:45:45.351914 | orchestrator |
2026-04-02 00:45:45.351926 | orchestrator |
2026-04-02 00:45:45.351932 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 00:45:45.351938 | orchestrator |
2026-04-02 00:45:45.351943 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 00:45:45.351949 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:00.575) 0:00:00.575 ********
2026-04-02 00:45:45.351955 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-02 00:45:45.351961 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-02 00:45:45.351966 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-02 00:45:45.351971 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-02 00:45:45.351977 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-02 00:45:45.351982 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-02 00:45:45.351988 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-02 00:45:45.351993 | orchestrator |
2026-04-02 00:45:45.351999 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-02 00:45:45.352004 | orchestrator |
2026-04-02 00:45:45.352010 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-02 00:45:45.352016 | orchestrator | Thursday 02 April 2026 00:44:12 +0000 (0:00:01.760) 0:00:02.335 ********
2026-04-02 00:45:45.352032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:45:45.352038 | orchestrator |
2026-04-02 00:45:45.352044 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-02 00:45:45.352049 | orchestrator | Thursday 02 April 2026 00:44:14 +0000 (0:00:01.790) 0:00:04.126 ********
2026-04-02 00:45:45.352054 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:45:45.352061 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:45.352066 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:45:45.352072 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:45:45.352077 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:45:45.352082 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:45:45.352087 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:45:45.352093 | orchestrator |
2026-04-02 00:45:45.352098 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-02 00:45:45.352116 | orchestrator | Thursday 02 April 2026 00:44:18 +0000 (0:00:03.851) 0:00:08.576 ********
2026-04-02 00:45:45.352122 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:45:45.352127 | orchestrator | ok: [testbed-manager]
2026-04-02 00:45:45.352133 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:45:45.352138 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:45:45.352143 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:45:45.352148 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:45:45.352153 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:45:45.352158 | orchestrator |
2026-04-02 00:45:45.352164 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-02 00:45:45.352169 | orchestrator | Thursday 02 April 2026 00:44:22 +0000 (0:00:03.851) 0:00:12.428 ********
2026-04-02 00:45:45.352174 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:45.352180 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:45:45.352185 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:45:45.352190 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:45:45.352195 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:45:45.352201 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:45:45.352206 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:45:45.352212 | orchestrator |
2026-04-02 00:45:45.352217 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-02 00:45:45.352223 | orchestrator | Thursday 02 April 2026 00:44:25 +0000 (0:00:02.666) 0:00:15.094 ********
2026-04-02 00:45:45.352228 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:45:45.352233 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:45:45.352238 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:45:45.352243 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:45:45.352249 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:45:45.352254 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:45:45.352259 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:45.352264 | orchestrator |
2026-04-02 00:45:45.352270 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-02 00:45:45.352275 | orchestrator | Thursday 02 April 2026 00:44:35 +0000 (0:00:10.134) 0:00:25.228 ********
2026-04-02 00:45:45.352280 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:45:45.352286 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:45:45.352337 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:45:45.352342 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:45:45.352346 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:45:45.352351 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:45:45.352356 | orchestrator | changed: [testbed-manager]
2026-04-02 00:45:45.352361 | orchestrator |
2026-04-02 00:45:45.352365 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-02 00:45:45.352370 | orchestrator | Thursday 02 April 2026 00:45:17 +0000 (0:00:41.775) 0:01:07.003 ********
2026-04-02 00:45:45.352376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:45:45.352381 | orchestrator |
2026-04-02 00:45:45.352386 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-02 00:45:45.352391 | orchestrator | Thursday 02 April 2026 00:45:18 +0000 (0:00:01.666) 0:01:08.670 ********
2026-04-02 00:45:45.352395 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-02 00:45:45.352400 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-02 00:45:45.352405 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-02 00:45:45.352410 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-02 00:45:45.352421 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-02 00:45:45.352426 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-02 00:45:45.352431 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-02 00:45:45.352436 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-02 00:45:45.352445 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-02 00:45:45.352450 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-02 00:45:45.352455 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-02 00:45:45.352459 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-02 00:45:45.352464 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-02 00:45:45.352469 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-02 00:45:45.352473 | orchestrator |
2026-04-02 00:45:45.352478 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-02 00:45:45.352484 | orchestrator | Thursday 02 April 2026 00:45:22 +0000 (0:00:03.913) 0:01:12.583 ********
2026-04-02 00:45:45.352512 | orchestrator | ok: [testbed-manager] 2026-04-02 00:45:45.352517 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:45:45.352522 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:45:45.352527 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:45:45.352531 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:45:45.352536 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:45:45.352541 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:45:45.352545 | orchestrator | 2026-04-02 00:45:45.352550 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-02 00:45:45.352555 | orchestrator | Thursday 02 April 2026 00:45:23 +0000 (0:00:01.156) 0:01:13.740 ******** 2026-04-02 00:45:45.352560 | orchestrator | changed: [testbed-manager] 2026-04-02 00:45:45.352565 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:45:45.352570 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:45:45.352574 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:45:45.352579 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:45:45.352584 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:45:45.352589 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:45:45.352594 | orchestrator | 2026-04-02 00:45:45.352599 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-02 00:45:45.352603 | orchestrator | Thursday 02 April 2026 00:45:25 +0000 (0:00:01.262) 0:01:15.002 ******** 2026-04-02 00:45:45.352608 | orchestrator | ok: [testbed-manager] 2026-04-02 00:45:45.352613 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:45:45.352618 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:45:45.352623 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:45:45.352627 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:45:45.352632 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:45:45.352637 | orchestrator | ok: [testbed-node-5] 2026-04-02 
00:45:45.352642 | orchestrator | 2026-04-02 00:45:45.352647 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-02 00:45:45.352651 | orchestrator | Thursday 02 April 2026 00:45:26 +0000 (0:00:01.385) 0:01:16.387 ******** 2026-04-02 00:45:45.352656 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:45:45.352667 | orchestrator | ok: [testbed-manager] 2026-04-02 00:45:45.352672 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:45:45.352677 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:45:45.352682 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:45:45.352687 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:45:45.352692 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:45:45.352697 | orchestrator | 2026-04-02 00:45:45.352702 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-02 00:45:45.352708 | orchestrator | Thursday 02 April 2026 00:45:28 +0000 (0:00:01.630) 0:01:18.018 ******** 2026-04-02 00:45:45.352713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-02 00:45:45.352719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:45:45.352725 | orchestrator | 2026-04-02 00:45:45.352730 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-02 00:45:45.352738 | orchestrator | Thursday 02 April 2026 00:45:29 +0000 (0:00:01.465) 0:01:19.483 ******** 2026-04-02 00:45:45.352743 | orchestrator | changed: [testbed-manager] 2026-04-02 00:45:45.352748 | orchestrator | 2026-04-02 00:45:45.352753 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-02 00:45:45.352758 | 
orchestrator | Thursday 02 April 2026 00:45:31 +0000 (0:00:01.869) 0:01:21.352 ******** 2026-04-02 00:45:45.352763 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:45:45.352768 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:45:45.352772 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:45:45.352777 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:45:45.352782 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:45:45.352789 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:45:45.352794 | orchestrator | changed: [testbed-manager] 2026-04-02 00:45:45.352799 | orchestrator | 2026-04-02 00:45:45.352804 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:45:45.352809 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:45:45.352816 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:45:45.352821 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:45:45.352826 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:45:45.352834 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:45:45.352839 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:45:45.352844 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:45:45.352849 | orchestrator | 2026-04-02 00:45:45.352854 | orchestrator | 2026-04-02 00:45:45.352858 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:45:45.352869 | orchestrator | Thursday 02 April 2026 00:45:42 +0000 (0:00:10.978) 0:01:32.331 
******** 2026-04-02 00:45:45.352920 | orchestrator | =============================================================================== 2026-04-02 00:45:45.352926 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.78s 2026-04-02 00:45:45.352931 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 10.98s 2026-04-02 00:45:45.352936 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.13s 2026-04-02 00:45:45.352940 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.45s 2026-04-02 00:45:45.352945 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.91s 2026-04-02 00:45:45.352950 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.85s 2026-04-02 00:45:45.352955 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.66s 2026-04-02 00:45:45.352960 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.87s 2026-04-02 00:45:45.352964 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.79s 2026-04-02 00:45:45.352969 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.76s 2026-04-02 00:45:45.352973 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.67s 2026-04-02 00:45:45.352978 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.63s 2026-04-02 00:45:45.352983 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.47s 2026-04-02 00:45:45.352991 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.39s 2026-04-02 00:45:45.352996 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.26s 
2026-04-02 00:45:45.353001 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.16s 2026-04-02 00:45:45.353737 | orchestrator | 2026-04-02 00:45:45 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:45:45.354128 | orchestrator | 2026-04-02 00:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:45:48.398945 | orchestrator | 2026-04-02 00:45:48 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:45:48.399812 | orchestrator | 2026-04-02 00:45:48 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:45:48.400543 | orchestrator | 2026-04-02 00:45:48 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:45:48.400578 | orchestrator | 2026-04-02 00:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:45:51.438470 | orchestrator | 2026-04-02 00:45:51 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:45:51.439476 | orchestrator | 2026-04-02 00:45:51 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:45:51.442555 | orchestrator | 2026-04-02 00:45:51 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:45:51.442617 | orchestrator | 2026-04-02 00:45:51 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:45:54.474634 | orchestrator | 2026-04-02 00:45:54 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:45:54.478147 | orchestrator | 2026-04-02 00:45:54 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:45:54.478192 | orchestrator | 2026-04-02 00:45:54 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:45:54.478198 | orchestrator | 2026-04-02 00:45:54 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:45:57.517627 | orchestrator | 
2026-04-02 00:45:57 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:45:57.518166 | orchestrator | 2026-04-02 00:45:57 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:45:57.518808 | orchestrator | 2026-04-02 00:45:57 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:45:57.518834 | orchestrator | 2026-04-02 00:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:00.568388 | orchestrator | 2026-04-02 00:46:00 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:00.569856 | orchestrator | 2026-04-02 00:46:00 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:00.571459 | orchestrator | 2026-04-02 00:46:00 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:46:00.571493 | orchestrator | 2026-04-02 00:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:03.611896 | orchestrator | 2026-04-02 00:46:03 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:03.614009 | orchestrator | 2026-04-02 00:46:03 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:03.614555 | orchestrator | 2026-04-02 00:46:03 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:46:03.614583 | orchestrator | 2026-04-02 00:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:06.655260 | orchestrator | 2026-04-02 00:46:06 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:06.656485 | orchestrator | 2026-04-02 00:46:06 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:06.658672 | orchestrator | 2026-04-02 00:46:06 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:46:06.658704 | orchestrator | 2026-04-02 00:46:06 | INFO  | 
Wait 1 second(s) until the next check 2026-04-02 00:46:09.719953 | orchestrator | 2026-04-02 00:46:09 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:09.720508 | orchestrator | 2026-04-02 00:46:09 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:09.721028 | orchestrator | 2026-04-02 00:46:09 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:46:09.721055 | orchestrator | 2026-04-02 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:12.752582 | orchestrator | 2026-04-02 00:46:12 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:12.753631 | orchestrator | 2026-04-02 00:46:12 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:12.754613 | orchestrator | 2026-04-02 00:46:12 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:46:12.754642 | orchestrator | 2026-04-02 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:15.791106 | orchestrator | 2026-04-02 00:46:15 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:15.792881 | orchestrator | 2026-04-02 00:46:15 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:15.794581 | orchestrator | 2026-04-02 00:46:15 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state STARTED 2026-04-02 00:46:15.794631 | orchestrator | 2026-04-02 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:18.834056 | orchestrator | 2026-04-02 00:46:18 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:18.834464 | orchestrator | 2026-04-02 00:46:18 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:18.836029 | orchestrator | 2026-04-02 00:46:18 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state 
STARTED 2026-04-02 00:46:18.836063 | orchestrator | 2026-04-02 00:46:18 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:21.864827 | orchestrator | 2026-04-02 00:46:21 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:46:21.864927 | orchestrator | 2026-04-02 00:46:21 | INFO  | Task dd98c9fc-3acb-407c-90c2-4694aef12ba2 is in state STARTED 2026-04-02 00:46:21.865235 | orchestrator | 2026-04-02 00:46:21 | INFO  | Task cadb2cbf-0f5f-4d95-8a96-a5b07565ba81 is in state STARTED 2026-04-02 00:46:21.866064 | orchestrator | 2026-04-02 00:46:21 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:21.869083 | orchestrator | 2026-04-02 00:46:21 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:21.871985 | orchestrator | 2026-04-02 00:46:21 | INFO  | Task 38e58049-1ce5-4398-b669-d11519540f43 is in state SUCCESS 2026-04-02 00:46:21.874434 | orchestrator | 2026-04-02 00:46:21.874539 | orchestrator | 2026-04-02 00:46:21.874561 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-02 00:46:21.874571 | orchestrator | 2026-04-02 00:46:21.874581 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-02 00:46:21.874594 | orchestrator | Thursday 02 April 2026 00:44:03 +0000 (0:00:00.296) 0:00:00.296 ******** 2026-04-02 00:46:21.874634 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:46:21.874651 | orchestrator | 2026-04-02 00:46:21.874663 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-02 00:46:21.874675 | orchestrator | Thursday 02 April 2026 00:44:04 +0000 (0:00:01.103) 0:00:01.400 ******** 2026-04-02 00:46:21.874687 | orchestrator | changed: 
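The repeated status checks above follow a simple poll-until-terminal-state loop: query each outstanding task, log its state, sleep, and repeat until every task reaches a terminal state such as SUCCESS. A minimal sketch of that pattern in Python (the `wait_for_tasks` helper, the state names, and the callable-per-task interface are illustrative assumptions, not the actual OSISM client API):

```python
import time

# Assumed terminal states; real task queues (e.g. Celery) also have FAILURE, REVOKED, etc.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_states, poll=1):
    """Poll until every task reaches a terminal state.

    task_states: dict mapping task id -> callable returning the current state string
    poll: seconds to sleep between polling rounds
    Returns a dict mapping task id -> final state.
    """
    pending = dict(task_states)
    results = {}
    while pending:
        # Iterate over a snapshot so we can delete finished tasks mid-loop.
        for task_id, get_state in list(pending.items()):
            state = get_state()
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
                del pending[task_id]
        if pending:
            print(f"Wait {poll} second(s) until the next check")
            time.sleep(poll)
    return results
```

With a one-second interval this reproduces the cadence seen in the log; in practice a backoff or an overall timeout would usually be added so a stuck task cannot poll forever.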
[testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-02 00:46:21.874815 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-02 00:46:21.874828 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-02 00:46:21.874835 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-02 00:46:21.874843 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-02 00:46:21.874850 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-02 00:46:21.874857 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-02 00:46:21.874864 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-02 00:46:21.874871 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-02 00:46:21.874878 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-02 00:46:21.874885 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-02 00:46:21.874892 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-02 00:46:21.874898 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-02 00:46:21.874905 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-02 00:46:21.874912 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-02 00:46:21.874918 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-02 00:46:21.874925 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 
'fluentd'}, 'fluentd']) 2026-04-02 00:46:21.874931 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-02 00:46:21.874938 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-02 00:46:21.874944 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-02 00:46:21.874951 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-02 00:46:21.874958 | orchestrator | 2026-04-02 00:46:21.874964 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-02 00:46:21.874970 | orchestrator | Thursday 02 April 2026 00:44:09 +0000 (0:00:04.315) 0:00:05.715 ******** 2026-04-02 00:46:21.874977 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:46:21.874985 | orchestrator | 2026-04-02 00:46:21.874992 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-02 00:46:21.875000 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:01.325) 0:00:07.041 ******** 2026-04-02 00:46:21.875013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.875046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.875074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.875083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.875091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.875100 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.875107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.875115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875161 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.875250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2026-04-02 00:46:21.875258 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875297 | orchestrator |
2026-04-02 00:46:21.875327 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-02 00:46:21.875336 | orchestrator | Thursday 02 April 2026 00:44:15 +0000 (0:00:05.469) 0:00:12.511 ********
2026-04-02 00:46:21.875345 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875354 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875367 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875414 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:46:21.875422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875472 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:46:21.875488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875510 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:46:21.875516 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:46:21.875523 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:46:21.875530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875575 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:46:21.875593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875636 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:46:21.875646 | orchestrator |
2026-04-02 00:46:21.875657 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-02 00:46:21.875669 | orchestrator | Thursday 02 April 2026 00:44:18 +0000 (0:00:02.490) 0:00:15.002 ********
2026-04-02 00:46:21.875681 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875694 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875729 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:46:21.875741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875782 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:46:21.875799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.875840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.875861 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:46:21.875873 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:46:21.875885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.876762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.876797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.876809 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:46:21.876818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.876825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.876840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.876847 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:46:21.876853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.876860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.876872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.876879 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:46:21.876885 | orchestrator |
2026-04-02 00:46:21.876892 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-02 00:46:21.876899 | orchestrator | Thursday 02 April 2026 00:44:21 +0000 (0:00:02.738) 0:00:17.740 ********
2026-04-02 00:46:21.876906 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:46:21.876912 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:46:21.876918 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:46:21.876924 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:46:21.876930 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:46:21.876943 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:46:21.876950 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:46:21.876956 | orchestrator |
2026-04-02 00:46:21.876962 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-02 00:46:21.876969 | orchestrator | Thursday 02 April 2026 00:44:22 +0000 (0:00:01.458) 0:00:19.199 ********
2026-04-02 00:46:21.876975 | orchestrator | skipping: [testbed-manager]
2026-04-02 00:46:21.876981 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:46:21.876987 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:46:21.876993 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:46:21.876999 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:46:21.877013 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:46:21.877023 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:46:21.877032 | orchestrator |
2026-04-02 00:46:21.877042 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-02 00:46:21.877051 | orchestrator | Thursday 02 April 2026 00:44:24 +0000 (0:00:01.456) 0:00:20.655 ********
2026-04-02 00:46:21.877060 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.877070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.877080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.877090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.877101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.877115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.877151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877171 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-02 00:46:21.877204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877298 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:46:21.877311 | orchestrator |
2026-04-02 00:46:21.877317 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-02 00:46:21.877324 | orchestrator | Thursday 02 April 2026 00:44:30 +0000 (0:00:06.656) 0:00:27.312 ********
2026-04-02 00:46:21.877330 | orchestrator | [WARNING]: Skipped
2026-04-02 00:46:21.877340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-02 00:46:21.877352 | orchestrator | to this access issue:
2026-04-02 00:46:21.877360 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-02 00:46:21.877368 | orchestrator | directory
2026-04-02 00:46:21.877374 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 00:46:21.877382 | orchestrator |
2026-04-02 00:46:21.877389 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-02 00:46:21.877396 | orchestrator | Thursday 02 April 2026 00:44:31 +0000 (0:00:01.128) 0:00:28.440 ********
2026-04-02 00:46:21.877403 | orchestrator | [WARNING]: Skipped
2026-04-02 00:46:21.877410 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-02 00:46:21.877421 | orchestrator | to this access issue:
2026-04-02 00:46:21.877429 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-02 00:46:21.877436 | orchestrator | directory
2026-04-02 00:46:21.877443 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 00:46:21.877451 | orchestrator |
2026-04-02 00:46:21.877458 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-02 00:46:21.877466 | orchestrator | Thursday 02 April 2026 00:44:33 +0000 (0:00:01.245) 0:00:29.686 ********
2026-04-02 00:46:21.877473 | orchestrator | [WARNING]: Skipped
2026-04-02 00:46:21.877481 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-02 00:46:21.877488 | orchestrator | to this access issue:
2026-04-02 00:46:21.877494 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-02 00:46:21.877500 | orchestrator | directory
2026-04-02 00:46:21.877507 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 00:46:21.877513 | orchestrator |
2026-04-02 00:46:21.877519 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-02 00:46:21.877525 | orchestrator | Thursday 02 April 2026 00:44:33 +0000 (0:00:00.814) 0:00:30.501 ********
2026-04-02 00:46:21.877531 | orchestrator | [WARNING]: Skipped 2026-04-02 00:46:21.877538 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-02 00:46:21.877544 | orchestrator | to this access issue: 2026-04-02 00:46:21.877550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-02 00:46:21.877556 | orchestrator | directory 2026-04-02 00:46:21.877562 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-02 00:46:21.877568 | orchestrator | 2026-04-02 00:46:21.877574 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-02 00:46:21.877580 | orchestrator | Thursday 02 April 2026 00:44:34 +0000 (0:00:00.684) 0:00:31.185 ******** 2026-04-02 00:46:21.877587 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:21.877593 | orchestrator | changed: [testbed-manager] 2026-04-02 00:46:21.877599 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:21.877605 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:46:21.877612 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:46:21.877618 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:21.877624 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:46:21.877630 | orchestrator | 2026-04-02 00:46:21.877636 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-02 00:46:21.877642 | orchestrator | Thursday 02 April 2026 00:44:38 +0000 (0:00:04.370) 0:00:35.556 ******** 2026-04-02 00:46:21.877649 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-02 00:46:21.877655 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-02 00:46:21.877662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 
2026-04-02 00:46:21.877668 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-02 00:46:21.877674 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-02 00:46:21.877685 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-02 00:46:21.877691 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-02 00:46:21.877697 | orchestrator | 2026-04-02 00:46:21.877704 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-02 00:46:21.877710 | orchestrator | Thursday 02 April 2026 00:44:42 +0000 (0:00:03.147) 0:00:38.704 ******** 2026-04-02 00:46:21.877716 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:21.877722 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:21.877728 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:21.877734 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:46:21.877740 | orchestrator | changed: [testbed-manager] 2026-04-02 00:46:21.877747 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:46:21.877753 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:46:21.877759 | orchestrator | 2026-04-02 00:46:21.877765 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-02 00:46:21.877772 | orchestrator | Thursday 02 April 2026 00:44:44 +0000 (0:00:02.323) 0:00:41.027 ******** 2026-04-02 00:46:21.877778 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.877793 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 00:46:21.877800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.877806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 00:46:21.877813 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.877824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 00:46:21.877830 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.877838 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.877847 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.877858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 00:46:21.877864 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.877871 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.877878 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.877888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 00:46:21.877895 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.877902 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.877911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 00:46:21.877921 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.877929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 00:46:21.877935 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.877941 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.877952 | orchestrator | 2026-04-02 00:46:21.877958 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-02 00:46:21.877965 | orchestrator | Thursday 02 April 2026 00:44:48 +0000 (0:00:04.252) 0:00:45.280 ******** 2026-04-02 00:46:21.877971 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-02 00:46:21.877977 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-02 00:46:21.877984 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-02 00:46:21.877990 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-02 00:46:21.877996 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-02 00:46:21.878002 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-02 00:46:21.878008 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-02 00:46:21.878046 | orchestrator | 2026-04-02 00:46:21.878053 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-02 00:46:21.878059 | orchestrator | Thursday 02 April 2026 00:44:51 +0000 (0:00:02.647) 0:00:47.927 ******** 2026-04-02 00:46:21.878066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-02 00:46:21.878072 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-02 00:46:21.878078 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-02 00:46:21.878085 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-02 00:46:21.878091 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-02 00:46:21.878097 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-02 00:46:21.878103 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-02 00:46:21.878110 | orchestrator | 2026-04-02 00:46:21.878116 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-02 00:46:21.878122 | orchestrator | Thursday 02 April 2026 00:44:54 +0000 (0:00:02.904) 0:00:50.831 ******** 2026-04-02 00:46:21.878134 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.878147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.878154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.878165 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.878171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.878178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.878194 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-02 00:46:21.878222 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878312 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:46:21.878325 | orchestrator | 2026-04-02 00:46:21.878332 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-02 00:46:21.878338 | orchestrator | Thursday 02 April 2026 00:44:57 +0000 (0:00:03.322) 0:00:54.154 ******** 2026-04-02 00:46:21.878344 | orchestrator | changed: [testbed-manager] 2026-04-02 00:46:21.878351 | 
orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:21.878357 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:21.878363 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:21.878369 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:46:21.878375 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:46:21.878382 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:46:21.878388 | orchestrator | 2026-04-02 00:46:21.878394 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-02 00:46:21.878400 | orchestrator | Thursday 02 April 2026 00:44:59 +0000 (0:00:01.592) 0:00:55.746 ******** 2026-04-02 00:46:21.878407 | orchestrator | changed: [testbed-manager] 2026-04-02 00:46:21.878413 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:21.878419 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:21.878425 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:21.878432 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:46:21.878438 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:46:21.878444 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:46:21.878450 | orchestrator | 2026-04-02 00:46:21.878456 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-02 00:46:21.878463 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:01.279) 0:00:57.026 ******** 2026-04-02 00:46:21.878469 | orchestrator | 2026-04-02 00:46:21.878475 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-02 00:46:21.878481 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:00.071) 0:00:57.098 ******** 2026-04-02 00:46:21.878488 | orchestrator | 2026-04-02 00:46:21.878494 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-02 00:46:21.878504 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 
(0:00:00.064) 0:00:57.162 ******** 2026-04-02 00:46:21.878510 | orchestrator | 2026-04-02 00:46:21.878516 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-02 00:46:21.878523 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:00.068) 0:00:57.231 ******** 2026-04-02 00:46:21.878529 | orchestrator | 2026-04-02 00:46:21.878535 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-02 00:46:21.878545 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:00.061) 0:00:57.292 ******** 2026-04-02 00:46:21.878551 | orchestrator | 2026-04-02 00:46:21.878557 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-02 00:46:21.878564 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:00.061) 0:00:57.353 ******** 2026-04-02 00:46:21.878570 | orchestrator | 2026-04-02 00:46:21.878576 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-02 00:46:21.878582 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:00.059) 0:00:57.413 ******** 2026-04-02 00:46:21.878588 | orchestrator | 2026-04-02 00:46:21.878595 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-02 00:46:21.878605 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:00.080) 0:00:57.493 ******** 2026-04-02 00:46:21.878611 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:21.878618 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:46:21.878624 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:46:21.878630 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:21.878636 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:21.878643 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:46:21.878649 | orchestrator | changed: [testbed-manager] 2026-04-02 00:46:21.878655 | orchestrator | 
2026-04-02 00:46:21.878661 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-02 00:46:21.878668 | orchestrator | Thursday 02 April 2026 00:45:31 +0000 (0:00:30.892) 0:01:28.386 ******** 2026-04-02 00:46:21.878674 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:21.878680 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:21.878686 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:46:21.878693 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:46:21.878699 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:46:21.878705 | orchestrator | changed: [testbed-manager] 2026-04-02 00:46:21.878711 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:21.878717 | orchestrator | 2026-04-02 00:46:21.878724 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-02 00:46:21.878730 | orchestrator | Thursday 02 April 2026 00:46:08 +0000 (0:00:37.212) 0:02:05.598 ******** 2026-04-02 00:46:21.878736 | orchestrator | ok: [testbed-manager] 2026-04-02 00:46:21.878742 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:46:21.878749 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:46:21.878755 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:46:21.878761 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:46:21.878768 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:46:21.878774 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:46:21.878780 | orchestrator | 2026-04-02 00:46:21.878787 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-02 00:46:21.878793 | orchestrator | Thursday 02 April 2026 00:46:10 +0000 (0:00:01.784) 0:02:07.383 ******** 2026-04-02 00:46:21.878799 | orchestrator | changed: [testbed-manager] 2026-04-02 00:46:21.878809 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:21.878818 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:46:21.878829 
| orchestrator | changed: [testbed-node-5] 2026-04-02 00:46:21.878838 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:21.878853 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:46:21.878866 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:21.878875 | orchestrator | 2026-04-02 00:46:21.878885 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:46:21.878904 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-02 00:46:21.878915 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-02 00:46:21.878925 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-02 00:46:21.878936 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-02 00:46:21.878947 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-02 00:46:21.878956 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-02 00:46:21.878967 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-02 00:46:21.878977 | orchestrator | 2026-04-02 00:46:21.878988 | orchestrator | 2026-04-02 00:46:21.878998 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:46:21.879009 | orchestrator | Thursday 02 April 2026 00:46:19 +0000 (0:00:09.202) 0:02:16.585 ******** 2026-04-02 00:46:21.879018 | orchestrator | =============================================================================== 2026-04-02 00:46:21.879027 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.21s 2026-04-02 00:46:21.879036 | orchestrator | common : Restart 
fluentd container ------------------------------------- 30.89s 2026-04-02 00:46:21.879045 | orchestrator | common : Restart cron container ----------------------------------------- 9.20s 2026-04-02 00:46:21.879054 | orchestrator | common : Copying over config.json files for services -------------------- 6.66s 2026-04-02 00:46:21.879065 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.47s 2026-04-02 00:46:21.879075 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.37s 2026-04-02 00:46:21.879085 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.32s 2026-04-02 00:46:21.879095 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.25s 2026-04-02 00:46:21.879101 | orchestrator | common : Check common containers ---------------------------------------- 3.32s 2026-04-02 00:46:21.879107 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.15s 2026-04-02 00:46:21.879113 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.90s 2026-04-02 00:46:21.879119 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.74s 2026-04-02 00:46:21.879125 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.65s 2026-04-02 00:46:21.879132 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.49s 2026-04-02 00:46:21.879145 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.32s 2026-04-02 00:46:21.879151 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.78s 2026-04-02 00:46:21.879157 | orchestrator | common : Creating log volume -------------------------------------------- 1.59s 2026-04-02 00:46:21.879164 | orchestrator | common : Copying over /run 
subdirectories conf -------------------------- 1.46s 2026-04-02 00:46:21.879170 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.46s 2026-04-02 00:46:21.879176 | orchestrator | common : include_tasks -------------------------------------------------- 1.33s 2026-04-02 00:46:21.879183 | orchestrator | 2026-04-02 00:46:21 | INFO  | Task 35de4c60-36b7-48b9-8b8a-ca9beb26140c is in state STARTED 2026-04-02 00:46:21.879195 | orchestrator | 2026-04-02 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:24.899634 | orchestrator | 2026-04-02 00:46:24 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:46:24.902465 | orchestrator | 2026-04-02 00:46:24 | INFO  | Task dd98c9fc-3acb-407c-90c2-4694aef12ba2 is in state STARTED 2026-04-02 00:46:24.902799 | orchestrator | 2026-04-02 00:46:24 | INFO  | Task cadb2cbf-0f5f-4d95-8a96-a5b07565ba81 is in state STARTED 2026-04-02 00:46:24.903245 | orchestrator | 2026-04-02 00:46:24 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:24.903998 | orchestrator | 2026-04-02 00:46:24 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:24.904626 | orchestrator | 2026-04-02 00:46:24 | INFO  | Task 35de4c60-36b7-48b9-8b8a-ca9beb26140c is in state STARTED 2026-04-02 00:46:24.904649 | orchestrator | 2026-04-02 00:46:24 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:27.929643 | orchestrator | 2026-04-02 00:46:27 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:46:27.929823 | orchestrator | 2026-04-02 00:46:27 | INFO  | Task dd98c9fc-3acb-407c-90c2-4694aef12ba2 is in state STARTED 2026-04-02 00:46:27.930443 | orchestrator | 2026-04-02 00:46:27 | INFO  | Task cadb2cbf-0f5f-4d95-8a96-a5b07565ba81 is in state STARTED 2026-04-02 00:46:27.931298 | orchestrator | 2026-04-02 00:46:27 | INFO  | Task 
9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:27.931700 | orchestrator | 2026-04-02 00:46:27 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:27.932301 | orchestrator | 2026-04-02 00:46:27 | INFO  | Task 35de4c60-36b7-48b9-8b8a-ca9beb26140c is in state STARTED 2026-04-02 00:46:27.932331 | orchestrator | 2026-04-02 00:46:27 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:40.140784 | orchestrator | 2026-04-02 00:46:40 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:46:40.140841 | orchestrator | 2026-04-02 00:46:40 | INFO  | Task dd98c9fc-3acb-407c-90c2-4694aef12ba2 is in state STARTED 2026-04-02 00:46:40.140850 | orchestrator | 2026-04-02 00:46:40 | INFO  | Task cadb2cbf-0f5f-4d95-8a96-a5b07565ba81 is in state STARTED 2026-04-02 00:46:40.140857 | orchestrator | 2026-04-02 00:46:40 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:40.140864 | orchestrator | 2026-04-02 00:46:40 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:40.140871 | orchestrator | 2026-04-02 00:46:40 | INFO  | Task
35de4c60-36b7-48b9-8b8a-ca9beb26140c is in state SUCCESS 2026-04-02 00:46:40.140879 | orchestrator | 2026-04-02 00:46:40 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:46:43.162239 | orchestrator | 2026-04-02 00:46:43 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:46:43.164089 | orchestrator | 2026-04-02 00:46:43 | INFO  | Task dd98c9fc-3acb-407c-90c2-4694aef12ba2 is in state SUCCESS 2026-04-02 00:46:43.164881 | orchestrator | 2026-04-02 00:46:43.164903 | orchestrator | 2026-04-02 00:46:43.164911 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:46:43.164918 | orchestrator | 2026-04-02 00:46:43.164970 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:46:43.164981 | orchestrator | Thursday 02 April 2026 00:46:23 +0000 (0:00:00.301) 0:00:00.301 ******** 2026-04-02 00:46:43.164988 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:46:43.164995 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:46:43.165002 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:46:43.165017 | orchestrator | 2026-04-02 00:46:43.165023 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:46:43.165029 | orchestrator | Thursday 02 April 2026 00:46:24 +0000 (0:00:00.312) 0:00:00.613 ******** 2026-04-02 00:46:43.165036 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-02 00:46:43.165042 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-02 00:46:43.165049 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-02 00:46:43.165056 | orchestrator | 2026-04-02 00:46:43.165062 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-02 00:46:43.165069 | orchestrator | 2026-04-02 00:46:43.165075 | orchestrator | TASK [memcached : 
include_tasks] *********************************************** 2026-04-02 00:46:43.165082 | orchestrator | Thursday 02 April 2026 00:46:24 +0000 (0:00:00.465) 0:00:01.079 ******** 2026-04-02 00:46:43.165095 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:46:43.165126 | orchestrator | 2026-04-02 00:46:43.165133 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-02 00:46:43.165139 | orchestrator | Thursday 02 April 2026 00:46:25 +0000 (0:00:00.515) 0:00:01.594 ******** 2026-04-02 00:46:43.165146 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-02 00:46:43.165153 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-02 00:46:43.165159 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-02 00:46:43.165165 | orchestrator | 2026-04-02 00:46:43.165172 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-02 00:46:43.165178 | orchestrator | Thursday 02 April 2026 00:46:26 +0000 (0:00:01.451) 0:00:03.045 ******** 2026-04-02 00:46:43.165184 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-02 00:46:43.165191 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-02 00:46:43.165197 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-02 00:46:43.165204 | orchestrator | 2026-04-02 00:46:43.165210 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-02 00:46:43.165226 | orchestrator | Thursday 02 April 2026 00:46:28 +0000 (0:00:01.838) 0:00:04.884 ******** 2026-04-02 00:46:43.165233 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:43.165239 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:43.165264 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:43.165270 | orchestrator | 2026-04-02 
00:46:43.165276 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-02 00:46:43.165283 | orchestrator | Thursday 02 April 2026 00:46:30 +0000 (0:00:02.085) 0:00:06.970 ******** 2026-04-02 00:46:43.165290 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:43.165296 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:43.165302 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:43.165309 | orchestrator | 2026-04-02 00:46:43.165315 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:46:43.165322 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:46:43.165329 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:46:43.165336 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:46:43.165342 | orchestrator | 2026-04-02 00:46:43.165348 | orchestrator | 2026-04-02 00:46:43.165354 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:46:43.165361 | orchestrator | Thursday 02 April 2026 00:46:39 +0000 (0:00:08.704) 0:00:15.675 ******** 2026-04-02 00:46:43.165367 | orchestrator | =============================================================================== 2026-04-02 00:46:43.165373 | orchestrator | memcached : Restart memcached container --------------------------------- 8.70s 2026-04-02 00:46:43.165377 | orchestrator | memcached : Check memcached container ----------------------------------- 2.09s 2026-04-02 00:46:43.165381 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.84s 2026-04-02 00:46:43.165385 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.45s 2026-04-02 00:46:43.165389 | orchestrator | 
memcached : include_tasks ----------------------------------------------- 0.52s 2026-04-02 00:46:43.165392 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-04-02 00:46:43.165396 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-02 00:46:43.165400 | orchestrator | 2026-04-02 00:46:43.165403 | orchestrator | 2026-04-02 00:46:43.165407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:46:43.165411 | orchestrator | 2026-04-02 00:46:43.165414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:46:43.165422 | orchestrator | Thursday 02 April 2026 00:46:23 +0000 (0:00:00.365) 0:00:00.365 ******** 2026-04-02 00:46:43.165426 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:46:43.165430 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:46:43.165434 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:46:43.165437 | orchestrator | 2026-04-02 00:46:43.165441 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:46:43.165467 | orchestrator | Thursday 02 April 2026 00:46:23 +0000 (0:00:00.233) 0:00:00.598 ******** 2026-04-02 00:46:43.165472 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-02 00:46:43.165476 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-02 00:46:43.165480 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-02 00:46:43.165483 | orchestrator | 2026-04-02 00:46:43.165487 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-02 00:46:43.165491 | orchestrator | 2026-04-02 00:46:43.165495 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-02 00:46:43.165499 | orchestrator | Thursday 02 April 2026 
00:46:24 +0000 (0:00:00.453) 0:00:01.051 ******** 2026-04-02 00:46:43.165502 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:46:43.165506 | orchestrator | 2026-04-02 00:46:43.165510 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-02 00:46:43.165514 | orchestrator | Thursday 02 April 2026 00:46:24 +0000 (0:00:00.706) 0:00:01.758 ******** 2026-04-02 00:46:43.165519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165556 | orchestrator | 2026-04-02 00:46:43.165560 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-02 00:46:43.165563 | orchestrator | Thursday 02 April 2026 00:46:26 +0000 (0:00:01.913) 0:00:03.671 ******** 2026-04-02 00:46:43.165567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165610 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165616 | orchestrator | 2026-04-02 00:46:43.165621 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-02 00:46:43.165625 | orchestrator | Thursday 02 April 2026 00:46:29 +0000 (0:00:02.491) 0:00:06.163 ******** 2026-04-02 00:46:43.165630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165738 | orchestrator | 2026-04-02 00:46:43.165748 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-02 00:46:43.165754 | orchestrator | Thursday 02 April 2026 00:46:32 +0000 (0:00:03.331) 0:00:09.495 ******** 2026-04-02 00:46:43.165761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-02 00:46:43.165815 | orchestrator | 2026-04-02 00:46:43.165822 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-02 00:46:43.165828 | orchestrator | Thursday 02 April 2026 00:46:34 +0000 (0:00:02.184) 0:00:11.679 ******** 2026-04-02 00:46:43.165835 | orchestrator | 2026-04-02 00:46:43.165841 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-02 00:46:43.165851 | orchestrator | Thursday 02 April 2026 00:46:35 +0000 (0:00:00.281) 0:00:11.960 ******** 2026-04-02 00:46:43.165855 | orchestrator | 2026-04-02 00:46:43.165859 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-02 00:46:43.165863 | orchestrator | Thursday 02 April 2026 00:46:35 +0000 (0:00:00.058) 0:00:12.019 ******** 2026-04-02 00:46:43.165867 | orchestrator | 2026-04-02 00:46:43.165870 | 
orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-02 00:46:43.165874 | orchestrator | Thursday 02 April 2026 00:46:35 +0000 (0:00:00.054) 0:00:12.074 ******** 2026-04-02 00:46:43.165878 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:43.165884 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:43.165900 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:43.165906 | orchestrator | 2026-04-02 00:46:43.165912 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-02 00:46:43.165919 | orchestrator | Thursday 02 April 2026 00:46:37 +0000 (0:00:02.337) 0:00:14.412 ******** 2026-04-02 00:46:43.165924 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:46:43.165930 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:46:43.165936 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:46:43.165942 | orchestrator | 2026-04-02 00:46:43.165949 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:46:43.165955 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:46:43.165962 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:46:43.165968 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:46:43.165974 | orchestrator | 2026-04-02 00:46:43.165981 | orchestrator | 2026-04-02 00:46:43.165987 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:46:43.165994 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:03.511) 0:00:17.923 ******** 2026-04-02 00:46:43.166001 | orchestrator | =============================================================================== 2026-04-02 00:46:43.166005 | orchestrator | redis : Restart 
redis-sentinel container -------------------------------- 3.51s 2026-04-02 00:46:43.166009 | orchestrator | redis : Copying over redis config files --------------------------------- 3.33s 2026-04-02 00:46:43.166042 | orchestrator | redis : Copying over default config.json files -------------------------- 2.49s 2026-04-02 00:46:43.166053 | orchestrator | redis : Restart redis container ----------------------------------------- 2.34s 2026-04-02 00:46:43.166067 | orchestrator | redis : Check redis containers ------------------------------------------ 2.18s 2026-04-02 00:46:43.166074 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.91s 2026-04-02 00:46:43.166080 | orchestrator | redis : include_tasks --------------------------------------------------- 0.71s 2026-04-02 00:46:43.166087 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-04-02 00:46:43.166093 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.39s 2026-04-02 00:46:43.166099 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s 2026-04-02 00:46:43.166106 | orchestrator | 2026-04-02 00:46:43 | INFO  | Task cadb2cbf-0f5f-4d95-8a96-a5b07565ba81 is in state STARTED 2026-04-02 00:46:43.166112 | orchestrator | 2026-04-02 00:46:43 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:46:43.166513 | orchestrator | 2026-04-02 00:46:43 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 2026-04-02 00:46:43.167843 | orchestrator | 2026-04-02 00:46:43 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:46:43.167928 | orchestrator | 2026-04-02 00:46:43 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:47:29.468510 | orchestrator | 2026-04-02 00:47:29 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:47:29.468563 | orchestrator | 2026-04-02 00:47:29 | INFO  | Task cadb2cbf-0f5f-4d95-8a96-a5b07565ba81 is in state SUCCESS 2026-04-02 00:47:29.468571 | orchestrator | 2026-04-02 00:47:29 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:47:29.469582 | orchestrator | 2026-04-02 00:47:29.469658 | orchestrator | 2026-04-02 00:47:29.469669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:47:29.469676 | orchestrator | 2026-04-02 00:47:29.469679 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:47:29.469682 | orchestrator | Thursday 02 April 2026 00:46:23 +0000 (0:00:00.284) 0:00:00.284 ******** 2026-04-02 00:47:29.469707 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:47:29.469712 | orchestrator | ok: [testbed-node-4] 
2026-04-02 00:47:29.469715 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:47:29.469718 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:47:29.469721 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:47:29.469724 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:47:29.469728 | orchestrator | 2026-04-02 00:47:29.469731 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:47:29.469734 | orchestrator | Thursday 02 April 2026 00:46:24 +0000 (0:00:00.587) 0:00:00.872 ******** 2026-04-02 00:47:29.469738 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-02 00:47:29.469741 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-02 00:47:29.469744 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-02 00:47:29.469747 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-02 00:47:29.469750 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-02 00:47:29.469754 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-02 00:47:29.469757 | orchestrator | 2026-04-02 00:47:29.469760 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-02 00:47:29.469772 | orchestrator | 2026-04-02 00:47:29.469775 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-02 00:47:29.469783 | orchestrator | Thursday 02 April 2026 00:46:25 +0000 (0:00:00.961) 0:00:01.833 ******** 2026-04-02 00:47:29.469787 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:47:29.469791 | orchestrator | 2026-04-02 00:47:29.469794 | 
orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-02 00:47:29.469797 | orchestrator | Thursday 02 April 2026 00:46:26 +0000 (0:00:01.027) 0:00:02.861 ******** 2026-04-02 00:47:29.469801 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-02 00:47:29.469804 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-02 00:47:29.469807 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-02 00:47:29.469810 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-02 00:47:29.469813 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-02 00:47:29.469817 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-02 00:47:29.469820 | orchestrator | 2026-04-02 00:47:29.469828 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-02 00:47:29.469832 | orchestrator | Thursday 02 April 2026 00:46:27 +0000 (0:00:01.476) 0:00:04.337 ******** 2026-04-02 00:47:29.469835 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-02 00:47:29.469838 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-02 00:47:29.469841 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-02 00:47:29.469844 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-02 00:47:29.469847 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-02 00:47:29.469850 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-02 00:47:29.469853 | orchestrator | 2026-04-02 00:47:29.469857 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-02 00:47:29.469864 | orchestrator | Thursday 02 April 2026 00:46:29 +0000 (0:00:01.653) 0:00:05.990 ******** 2026-04-02 00:47:29.469868 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-02 
00:47:29.469910 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:47:29.469915 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-02 00:47:29.469918 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:47:29.469921 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-02 00:47:29.469925 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:47:29.469928 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-02 00:47:29.469931 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:47:29.469934 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-02 00:47:29.469937 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:47:29.469940 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-02 00:47:29.469943 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:47:29.469946 | orchestrator | 2026-04-02 00:47:29.469949 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-02 00:47:29.469952 | orchestrator | Thursday 02 April 2026 00:46:31 +0000 (0:00:02.176) 0:00:08.166 ******** 2026-04-02 00:47:29.469956 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:47:29.469959 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:47:29.469962 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:47:29.469965 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:47:29.469968 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:47:29.469971 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:47:29.469974 | orchestrator | 2026-04-02 00:47:29.469977 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-02 00:47:29.469980 | orchestrator | Thursday 02 April 2026 00:46:32 +0000 (0:00:00.663) 0:00:08.830 ******** 2026-04-02 00:47:29.469995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470119 | orchestrator | 2026-04-02 00:47:29.470124 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-02 00:47:29.470129 | orchestrator | Thursday 02 April 2026 00:46:34 +0000 (0:00:02.124) 0:00:10.955 ******** 2026-04-02 00:47:29.470133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470257 | orchestrator | 2026-04-02 00:47:29.470260 | 
orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-02 00:47:29.470263 | orchestrator | Thursday 02 April 2026 00:46:36 +0000 (0:00:02.573) 0:00:13.529 ******** 2026-04-02 00:47:29.470266 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:47:29.470269 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:47:29.470272 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:47:29.470276 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:47:29.470279 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:47:29.470282 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:47:29.470285 | orchestrator | 2026-04-02 00:47:29.470288 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-02 00:47:29.470291 | orchestrator | Thursday 02 April 2026 00:46:37 +0000 (0:00:00.617) 0:00:14.146 ******** 2026-04-02 00:47:29.470296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470362 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-02 00:47:29.470398 | orchestrator | 2026-04-02 00:47:29.470401 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-04-02 00:47:29.470405 | orchestrator | Thursday 02 April 2026 00:46:40 +0000 (0:00:02.761) 0:00:16.908 ******** 2026-04-02 00:47:29.470408 | orchestrator | 2026-04-02 00:47:29.470411 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-02 00:47:29.470414 | orchestrator | Thursday 02 April 2026 00:46:40 +0000 (0:00:00.144) 0:00:17.052 ******** 2026-04-02 00:47:29.470417 | orchestrator | 2026-04-02 00:47:29.470420 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-02 00:47:29.470423 | orchestrator | Thursday 02 April 2026 00:46:40 +0000 (0:00:00.289) 0:00:17.342 ******** 2026-04-02 00:47:29.470426 | orchestrator | 2026-04-02 00:47:29.470429 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-02 00:47:29.470433 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:00.348) 0:00:17.691 ******** 2026-04-02 00:47:29.470436 | orchestrator | 2026-04-02 00:47:29.470439 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-02 00:47:29.470444 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:00.314) 0:00:18.005 ******** 2026-04-02 00:47:29.470447 | orchestrator | 2026-04-02 00:47:29.470450 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-02 00:47:29.470459 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:00.139) 0:00:18.144 ******** 2026-04-02 00:47:29.470462 | orchestrator | 2026-04-02 00:47:29.470465 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-02 00:47:29.470468 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:00.167) 0:00:18.311 ******** 2026-04-02 00:47:29.470475 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:47:29.470479 | orchestrator | 
changed: [testbed-node-5] 2026-04-02 00:47:29.470482 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:47:29.470485 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:47:29.470488 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:47:29.470491 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:47:29.470494 | orchestrator | 2026-04-02 00:47:29.470497 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-02 00:47:29.470501 | orchestrator | Thursday 02 April 2026 00:46:51 +0000 (0:00:09.225) 0:00:27.537 ******** 2026-04-02 00:47:29.470504 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:47:29.470507 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:47:29.470511 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:47:29.470514 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:47:29.470517 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:47:29.470520 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:47:29.470523 | orchestrator | 2026-04-02 00:47:29.470526 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-02 00:47:29.470529 | orchestrator | Thursday 02 April 2026 00:46:52 +0000 (0:00:01.451) 0:00:28.988 ******** 2026-04-02 00:47:29.470532 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:47:29.470535 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:47:29.470539 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:47:29.470542 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:47:29.470545 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:47:29.470548 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:47:29.470551 | orchestrator | 2026-04-02 00:47:29.470554 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-02 00:47:29.470558 | orchestrator | Thursday 02 April 2026 00:47:02 +0000 (0:00:10.072) 0:00:39.060 ******** 2026-04-02 
00:47:29.470561 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-02 00:47:29.470564 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-02 00:47:29.470567 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-02 00:47:29.470570 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-02 00:47:29.470574 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-02 00:47:29.470579 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-02 00:47:29.470582 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-02 00:47:29.470585 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-02 00:47:29.470588 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-02 00:47:29.470591 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-02 00:47:29.470594 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-02 00:47:29.470601 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-02 00:47:29.470604 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-02 00:47:29.470607 | 
orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-02 00:47:29.470612 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-02 00:47:29.470615 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-02 00:47:29.470618 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-02 00:47:29.470621 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-02 00:47:29.470625 | orchestrator | 2026-04-02 00:47:29.470628 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-02 00:47:29.470631 | orchestrator | Thursday 02 April 2026 00:47:11 +0000 (0:00:08.867) 0:00:47.927 ******** 2026-04-02 00:47:29.470634 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-02 00:47:29.470637 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:47:29.470640 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-02 00:47:29.470643 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:47:29.470647 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-02 00:47:29.470650 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:47:29.470653 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-02 00:47:29.470656 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-02 00:47:29.470659 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-02 00:47:29.470662 | orchestrator | 2026-04-02 00:47:29.470666 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-02 00:47:29.470669 | orchestrator | Thursday 02 April 
2026 00:47:14 +0000 (0:00:02.765) 0:00:50.693 ********
2026-04-02 00:47:29.470672 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-02 00:47:29.470675 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:47:29.470678 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-02 00:47:29.470681 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:47:29.470684 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-02 00:47:29.470687 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:47:29.470691 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-02 00:47:29.470694 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-02 00:47:29.470697 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-02 00:47:29.470700 | orchestrator |
2026-04-02 00:47:29.470703 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-02 00:47:29.470706 | orchestrator | Thursday 02 April 2026 00:47:19 +0000 (0:00:05.492) 0:00:56.186 ********
2026-04-02 00:47:29.470709 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:47:29.470712 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:47:29.470715 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:47:29.470719 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:47:29.470722 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:47:29.470725 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:47:29.470728 | orchestrator |
2026-04-02 00:47:29.470731 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:47:29.470734 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 00:47:29.470738 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 00:47:29.470743 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 00:47:29.470746 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-02 00:47:29.470749 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-02 00:47:29.470754 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-02 00:47:29.470757 | orchestrator |
2026-04-02 00:47:29.470761 | orchestrator |
2026-04-02 00:47:29.470764 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:47:29.470767 | orchestrator | Thursday 02 April 2026 00:47:28 +0000 (0:00:08.870) 0:01:05.056 ********
2026-04-02 00:47:29.470770 | orchestrator | ===============================================================================
2026-04-02 00:47:29.470773 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.94s
2026-04-02 00:47:29.470776 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.23s
2026-04-02 00:47:29.470779 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.87s
2026-04-02 00:47:29.470782 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.49s
2026-04-02 00:47:29.470785 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.77s
2026-04-02 00:47:29.470788 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.76s
2026-04-02 00:47:29.470791 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.57s
2026-04-02 00:47:29.470795 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.18s
2026-04-02 00:47:29.470798 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.12s 2026-04-02 00:47:29.470802 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.65s 2026-04-02 00:47:29.470806 | orchestrator | module-load : Load modules ---------------------------------------------- 1.48s 2026-04-02 00:47:29.470809 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.45s 2026-04-02 00:47:29.470812 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.40s 2026-04-02 00:47:29.470815 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.03s 2026-04-02 00:47:29.470818 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2026-04-02 00:47:29.470821 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.66s 2026-04-02 00:47:29.470824 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.62s 2026-04-02 00:47:29.470827 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s 2026-04-02 00:47:29.470831 | orchestrator | 2026-04-02 00:47:29 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 2026-04-02 00:47:29.471057 | orchestrator | 2026-04-02 00:47:29 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED 2026-04-02 00:47:29.471094 | orchestrator | 2026-04-02 00:47:29 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:47:32.506689 | orchestrator | 2026-04-02 00:47:32 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:47:32.507704 | orchestrator | 2026-04-02 00:47:32 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:47:32.508809 | orchestrator | 2026-04-02 00:47:32 | INFO  | Task 
806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED
2026-04-02 00:48:33.414694 | orchestrator | 2026-04-02 00:48:33 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:48:33.415403 | orchestrator | 2026-04-02 00:48:33 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED
2026-04-02 00:48:33.415435 | orchestrator | 2026-04-02 00:48:33 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:48:36.455846 | orchestrator | 2026-04-02 00:48:36 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED
2026-04-02 00:48:36.459128 | orchestrator | 2026-04-02 00:48:36 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:48:36.460259 | orchestrator | 2026-04-02 00:48:36 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED
2026-04-02 00:48:36.461591 | orchestrator | 2026-04-02 00:48:36 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state STARTED
2026-04-02 00:48:36.466464 | orchestrator | 2026-04-02 00:48:36 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED
2026-04-02 00:48:36.466531 | orchestrator | 2026-04-02 00:48:36 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:48:39.497256 | orchestrator | 2026-04-02 00:48:39 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED
2026-04-02 00:48:39.497722 | orchestrator | 2026-04-02 00:48:39 | INFO  | Task a9093460-9294-4743-b3e7-6eb163854c65 is in state STARTED
2026-04-02 00:48:39.498217 | orchestrator | 2026-04-02 00:48:39 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:48:39.498896 | orchestrator | 2026-04-02 00:48:39 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED
2026-04-02 00:48:39.502400 | orchestrator |
2026-04-02 00:48:39.502439 | orchestrator |
2026-04-02 00:48:39.502444 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-02 00:48:39.502448 | orchestrator |
2026-04-02 00:48:39.502452 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-02 00:48:39.502457 | orchestrator | Thursday 02 April 2026 00:44:04 +0000 (0:00:00.289) 0:00:00.289 ********
2026-04-02 00:48:39.502460 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:48:39.502465 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:48:39.502469 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:48:39.502473 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:48:39.502476 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:48:39.502480 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:48:39.502484 | orchestrator |
2026-04-02 00:48:39.502488 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-02 00:48:39.502491 | orchestrator | Thursday 02 April 2026 00:44:04 +0000 (0:00:00.648) 0:00:00.937 ********
2026-04-02 00:48:39.502495 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.502499 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.502528 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.502532 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.502535 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.502539 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.502546 | orchestrator |
2026-04-02 00:48:39.502554 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-02 00:48:39.502562 | orchestrator | Thursday 02 April 2026 00:44:05 +0000 (0:00:00.746) 0:00:01.684 ********
2026-04-02 00:48:39.502569 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.502575 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.502581 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.502588 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.502594 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.502601 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.502606 | orchestrator |
2026-04-02 00:48:39.502653 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-02 00:48:39.502658 | orchestrator | Thursday 02 April 2026 00:44:06 +0000 (0:00:00.508) 0:00:02.192 ********
2026-04-02 00:48:39.502662 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:48:39.502666 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:48:39.502670 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:48:39.502673 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:48:39.502677 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:48:39.502681 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:48:39.502685 | orchestrator |
2026-04-02 00:48:39.502689 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-02 00:48:39.502694 | orchestrator | Thursday 02 April 2026 00:44:08 +0000 (0:00:02.419) 0:00:04.611 ********
2026-04-02 00:48:39.502702 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:48:39.502711 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:48:39.502718 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:48:39.502724 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:48:39.502731 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:48:39.502738 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:48:39.502745 | orchestrator |
2026-04-02 00:48:39.502751 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-02 00:48:39.502759 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:02.106) 0:00:06.718 ********
2026-04-02 00:48:39.502766 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:48:39.502773 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:48:39.502779 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:48:39.502788 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:48:39.502795 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:48:39.502801 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:48:39.502807 | orchestrator |
2026-04-02 00:48:39.502813 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-02 00:48:39.502819 | orchestrator | Thursday 02 April 2026 00:44:12 +0000 (0:00:02.121) 0:00:08.839 ********
2026-04-02 00:48:39.502824 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.502830 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.502836 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.502852 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.502859 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.502865 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.502872 | orchestrator |
2026-04-02 00:48:39.502877 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-02 00:48:39.502881 | orchestrator | Thursday 02 April 2026 00:44:14 +0000 (0:00:01.221) 0:00:10.061 ********
2026-04-02 00:48:39.502885 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.502889 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.502892 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.502896 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.502900 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.502910 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.502913 | orchestrator |
2026-04-02 00:48:39.502917 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-02 00:48:39.502921 | orchestrator | Thursday 02 April 2026 00:44:14 +0000 (0:00:00.864) 0:00:10.925 ********
2026-04-02 00:48:39.502925 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-02 00:48:39.502929 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-02 00:48:39.502933 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.502937 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-02 00:48:39.502948 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-02 00:48:39.502952 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.502955 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-02 00:48:39.502959 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-02 00:48:39.502964 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.502973 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-02 00:48:39.502992 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-02 00:48:39.502999 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.503004 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-02 00:48:39.503009 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-02 00:48:39.503013 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.503018 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-02 00:48:39.503022 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-02 00:48:39.503026 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.503031 | orchestrator |
2026-04-02 00:48:39.503035 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-02 00:48:39.503039 | orchestrator | Thursday 02 April 2026 00:44:15 +0000 (0:00:00.985) 0:00:11.911 ********
2026-04-02 00:48:39.503043 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.503047 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.503052 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.503062 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.503066 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.503071 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.503075 | orchestrator |
2026-04-02 00:48:39.503080 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-02 00:48:39.503085 | orchestrator | Thursday 02 April 2026 00:44:17 +0000 (0:00:02.053) 0:00:13.964 ********
2026-04-02 00:48:39.503090 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:48:39.503095 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:48:39.503099 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:48:39.503104 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:48:39.503108 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:48:39.503112 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:48:39.503117 | orchestrator |
2026-04-02 00:48:39.503122 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-02 00:48:39.503126 | orchestrator | Thursday 02 April 2026 00:44:18 +0000 (0:00:00.803) 0:00:14.767 ********
2026-04-02 00:48:39.503130 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:48:39.503135 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:48:39.503139 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:48:39.503144 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:48:39.503148 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:48:39.503152 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:48:39.503157 | orchestrator |
2026-04-02 00:48:39.503168 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-02 00:48:39.503174 | orchestrator | Thursday 02 April 2026 00:44:24 +0000 (0:00:06.071) 0:00:20.839 ********
2026-04-02 00:48:39.503194 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.503202 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.503209 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.503215 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.503222 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.503228 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.503234 | orchestrator |
2026-04-02 00:48:39.503240 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-02 00:48:39.503246 | orchestrator | Thursday 02 April 2026 00:44:26 +0000 (0:00:01.436) 0:00:22.275 ********
2026-04-02 00:48:39.503253 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.503259 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.503266 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.503273 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.503279 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.503286 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.503292 | orchestrator |
2026-04-02 00:48:39.503299 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-02 00:48:39.503307 | orchestrator | Thursday 02 April 2026 00:44:28 +0000 (0:00:01.809) 0:00:24.085 ********
2026-04-02 00:48:39.503314 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.503319 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.503322 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.503326 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.503330 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.503333 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.503337 | orchestrator |
2026-04-02 00:48:39.503341 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-02 00:48:39.503345 | orchestrator | Thursday 02 April 2026 00:44:29 +0000 (0:00:01.143) 0:00:25.228 ********
2026-04-02 00:48:39.503349 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-02 00:48:39.503353 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-02 00:48:39.503356 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:48:39.503360 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-02 00:48:39.503364 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-02 00:48:39.503368 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:48:39.503371 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-02 00:48:39.503375 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-02 00:48:39.503379 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:48:39.503383 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-02 00:48:39.503387 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-02 00:48:39.503394 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:48:39.503398 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-02 00:48:39.503402 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-02 00:48:39.503406 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:48:39.503409 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-02 00:48:39.503413 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-02 00:48:39.503417 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:48:39.503421 | orchestrator |
2026-04-02 00:48:39.503424 | orchestrator
| TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-02 00:48:39.503433 | orchestrator | Thursday 02 April 2026 00:44:29 +0000 (0:00:00.734) 0:00:25.963 ******** 2026-04-02 00:48:39.503437 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:48:39.503441 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:48:39.503445 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:48:39.503453 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.503457 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503460 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.503464 | orchestrator | 2026-04-02 00:48:39.503468 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-02 00:48:39.503472 | orchestrator | Thursday 02 April 2026 00:44:30 +0000 (0:00:00.738) 0:00:26.702 ******** 2026-04-02 00:48:39.503476 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:48:39.503479 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:48:39.503483 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:48:39.503487 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.503490 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503494 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.503498 | orchestrator | 2026-04-02 00:48:39.503502 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-02 00:48:39.503505 | orchestrator | 2026-04-02 00:48:39.503509 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-02 00:48:39.503513 | orchestrator | Thursday 02 April 2026 00:44:31 +0000 (0:00:01.162) 0:00:27.864 ******** 2026-04-02 00:48:39.503517 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.503521 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.503525 | 
orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.503528 | orchestrator | 2026-04-02 00:48:39.503532 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-02 00:48:39.503536 | orchestrator | Thursday 02 April 2026 00:44:32 +0000 (0:00:00.755) 0:00:28.620 ******** 2026-04-02 00:48:39.503540 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.503544 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.503547 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.503551 | orchestrator | 2026-04-02 00:48:39.503555 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-02 00:48:39.503559 | orchestrator | Thursday 02 April 2026 00:44:33 +0000 (0:00:01.245) 0:00:29.865 ******** 2026-04-02 00:48:39.503562 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.503566 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.503570 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.503574 | orchestrator | 2026-04-02 00:48:39.503577 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-02 00:48:39.503581 | orchestrator | Thursday 02 April 2026 00:44:34 +0000 (0:00:00.873) 0:00:30.739 ******** 2026-04-02 00:48:39.503585 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.503589 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.503592 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.503596 | orchestrator | 2026-04-02 00:48:39.503600 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-02 00:48:39.503604 | orchestrator | Thursday 02 April 2026 00:44:35 +0000 (0:00:01.181) 0:00:31.921 ******** 2026-04-02 00:48:39.503608 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.503611 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503615 | orchestrator | skipping: [testbed-node-2] 2026-04-02 
00:48:39.503619 | orchestrator | 2026-04-02 00:48:39.503623 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-02 00:48:39.503626 | orchestrator | Thursday 02 April 2026 00:44:36 +0000 (0:00:00.318) 0:00:32.240 ******** 2026-04-02 00:48:39.503630 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.503634 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.503638 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.503641 | orchestrator | 2026-04-02 00:48:39.503645 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-02 00:48:39.503649 | orchestrator | Thursday 02 April 2026 00:44:37 +0000 (0:00:01.137) 0:00:33.377 ******** 2026-04-02 00:48:39.503653 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.503657 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.503663 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.503667 | orchestrator | 2026-04-02 00:48:39.503671 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-02 00:48:39.503674 | orchestrator | Thursday 02 April 2026 00:44:39 +0000 (0:00:01.733) 0:00:35.111 ******** 2026-04-02 00:48:39.503678 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:48:39.503682 | orchestrator | 2026-04-02 00:48:39.503686 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-02 00:48:39.503690 | orchestrator | Thursday 02 April 2026 00:44:39 +0000 (0:00:00.757) 0:00:35.868 ******** 2026-04-02 00:48:39.503693 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.503697 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.503701 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.503705 | orchestrator | 2026-04-02 00:48:39.503709 | orchestrator | TASK [k3s_server : 
Create manifests directory on first master] ***************** 2026-04-02 00:48:39.503713 | orchestrator | Thursday 02 April 2026 00:44:41 +0000 (0:00:02.041) 0:00:37.909 ******** 2026-04-02 00:48:39.503720 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503726 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.503732 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.503739 | orchestrator | 2026-04-02 00:48:39.503746 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-02 00:48:39.503752 | orchestrator | Thursday 02 April 2026 00:44:42 +0000 (0:00:00.714) 0:00:38.624 ******** 2026-04-02 00:48:39.503759 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503768 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.503776 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.503782 | orchestrator | 2026-04-02 00:48:39.503788 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-02 00:48:39.503794 | orchestrator | Thursday 02 April 2026 00:44:44 +0000 (0:00:01.600) 0:00:40.224 ******** 2026-04-02 00:48:39.503799 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503805 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.503812 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.503818 | orchestrator | 2026-04-02 00:48:39.503825 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-02 00:48:39.503835 | orchestrator | Thursday 02 April 2026 00:44:46 +0000 (0:00:02.114) 0:00:42.339 ******** 2026-04-02 00:48:39.503841 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503847 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.503851 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.503855 | orchestrator | 2026-04-02 00:48:39.503859 | orchestrator | TASK [k3s_server : Deploy 
kube-vip manifest] *********************************** 2026-04-02 00:48:39.503862 | orchestrator | Thursday 02 April 2026 00:44:47 +0000 (0:00:00.815) 0:00:43.155 ******** 2026-04-02 00:48:39.503866 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.503870 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.503874 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.503877 | orchestrator | 2026-04-02 00:48:39.503881 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-02 00:48:39.503885 | orchestrator | Thursday 02 April 2026 00:44:47 +0000 (0:00:00.491) 0:00:43.646 ******** 2026-04-02 00:48:39.503889 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.503893 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.503896 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.503900 | orchestrator | 2026-04-02 00:48:39.503904 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-02 00:48:39.503908 | orchestrator | Thursday 02 April 2026 00:44:49 +0000 (0:00:02.060) 0:00:45.707 ******** 2026-04-02 00:48:39.503911 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.503915 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.503919 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.503923 | orchestrator | 2026-04-02 00:48:39.503927 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-02 00:48:39.503934 | orchestrator | Thursday 02 April 2026 00:44:52 +0000 (0:00:02.384) 0:00:48.091 ******** 2026-04-02 00:48:39.503938 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.503942 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.503945 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.503949 | orchestrator | 2026-04-02 00:48:39.503953 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails)] *** 2026-04-02 00:48:39.503957 | orchestrator | Thursday 02 April 2026 00:44:52 +0000 (0:00:00.316) 0:00:48.408 ******** 2026-04-02 00:48:39.503961 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-02 00:48:39.503967 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-02 00:48:39.503973 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-02 00:48:39.503980 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-02 00:48:39.503986 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-02 00:48:39.503993 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-02 00:48:39.503999 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-02 00:48:39.504005 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-02 00:48:39.504012 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-02 00:48:39.504018 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-04-02 00:48:39.504024 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-02 00:48:39.504027 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-02 00:48:39.504031 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.504035 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.504039 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.504042 | orchestrator | 2026-04-02 00:48:39.504046 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-02 00:48:39.504050 | orchestrator | Thursday 02 April 2026 00:45:35 +0000 (0:00:43.078) 0:01:31.486 ******** 2026-04-02 00:48:39.504054 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.504058 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.504061 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.504065 | orchestrator | 2026-04-02 00:48:39.504069 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-02 00:48:39.504075 | orchestrator | Thursday 02 April 2026 00:45:35 +0000 (0:00:00.375) 0:01:31.861 ******** 2026-04-02 00:48:39.504079 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504083 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504087 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.504090 | orchestrator | 2026-04-02 00:48:39.504094 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-02 00:48:39.504100 | orchestrator | Thursday 02 April 2026 00:45:36 +0000 (0:00:00.953) 0:01:32.815 ******** 2026-04-02 00:48:39.504106 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504118 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504127 | 
orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.504133 | orchestrator | 2026-04-02 00:48:39.504142 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-02 00:48:39.504148 | orchestrator | Thursday 02 April 2026 00:45:38 +0000 (0:00:01.288) 0:01:34.104 ******** 2026-04-02 00:48:39.504154 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.504160 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504166 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504172 | orchestrator | 2026-04-02 00:48:39.504178 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-02 00:48:39.504196 | orchestrator | Thursday 02 April 2026 00:46:17 +0000 (0:00:39.059) 0:02:13.163 ******** 2026-04-02 00:48:39.504203 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.504209 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.504215 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.504221 | orchestrator | 2026-04-02 00:48:39.504227 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-02 00:48:39.504234 | orchestrator | Thursday 02 April 2026 00:46:17 +0000 (0:00:00.514) 0:02:13.677 ******** 2026-04-02 00:48:39.504240 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.504246 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.504252 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.504258 | orchestrator | 2026-04-02 00:48:39.504264 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-02 00:48:39.504270 | orchestrator | Thursday 02 April 2026 00:46:18 +0000 (0:00:00.731) 0:02:14.408 ******** 2026-04-02 00:48:39.504277 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504283 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504290 | orchestrator | changed: [testbed-node-2] 
2026-04-02 00:48:39.504296 | orchestrator | 2026-04-02 00:48:39.504303 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-02 00:48:39.504309 | orchestrator | Thursday 02 April 2026 00:46:18 +0000 (0:00:00.511) 0:02:14.920 ******** 2026-04-02 00:48:39.504316 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.504321 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.504325 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.504330 | orchestrator | 2026-04-02 00:48:39.504337 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-02 00:48:39.504343 | orchestrator | Thursday 02 April 2026 00:46:19 +0000 (0:00:00.562) 0:02:15.483 ******** 2026-04-02 00:48:39.504350 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.504357 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.504363 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.504366 | orchestrator | 2026-04-02 00:48:39.504370 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-02 00:48:39.504374 | orchestrator | Thursday 02 April 2026 00:46:19 +0000 (0:00:00.286) 0:02:15.770 ******** 2026-04-02 00:48:39.504378 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504382 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504385 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.504389 | orchestrator | 2026-04-02 00:48:39.504393 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-02 00:48:39.504397 | orchestrator | Thursday 02 April 2026 00:46:20 +0000 (0:00:00.717) 0:02:16.487 ******** 2026-04-02 00:48:39.504401 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504405 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504409 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.504413 | orchestrator | 
2026-04-02 00:48:39.504416 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-02 00:48:39.504420 | orchestrator | Thursday 02 April 2026 00:46:21 +0000 (0:00:00.638) 0:02:17.126 ******** 2026-04-02 00:48:39.504424 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504427 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504433 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.504445 | orchestrator | 2026-04-02 00:48:39.504452 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-02 00:48:39.504458 | orchestrator | Thursday 02 April 2026 00:46:21 +0000 (0:00:00.833) 0:02:17.960 ******** 2026-04-02 00:48:39.504464 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:48:39.504471 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:48:39.504477 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:48:39.504482 | orchestrator | 2026-04-02 00:48:39.504489 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-02 00:48:39.504495 | orchestrator | Thursday 02 April 2026 00:46:22 +0000 (0:00:00.817) 0:02:18.778 ******** 2026-04-02 00:48:39.504502 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.504509 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.504516 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.504522 | orchestrator | 2026-04-02 00:48:39.504528 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-02 00:48:39.504532 | orchestrator | Thursday 02 April 2026 00:46:23 +0000 (0:00:00.391) 0:02:19.169 ******** 2026-04-02 00:48:39.504536 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.504541 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.504547 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.504555 | orchestrator | 
2026-04-02 00:48:39.504559 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-02 00:48:39.504566 | orchestrator | Thursday 02 April 2026 00:46:23 +0000 (0:00:00.241) 0:02:19.411 ******** 2026-04-02 00:48:39.504573 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.504581 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.504586 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.504590 | orchestrator | 2026-04-02 00:48:39.504594 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-02 00:48:39.504601 | orchestrator | Thursday 02 April 2026 00:46:23 +0000 (0:00:00.615) 0:02:20.027 ******** 2026-04-02 00:48:39.504621 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.504629 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.504637 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.504643 | orchestrator | 2026-04-02 00:48:39.504650 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-02 00:48:39.504657 | orchestrator | Thursday 02 April 2026 00:46:24 +0000 (0:00:00.634) 0:02:20.661 ******** 2026-04-02 00:48:39.504663 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-02 00:48:39.504676 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-02 00:48:39.504682 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-02 00:48:39.504689 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-02 00:48:39.504695 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-02 00:48:39.504702 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-02 00:48:39.504709 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-02 00:48:39.504715 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-02 00:48:39.504722 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-02 00:48:39.504727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-02 00:48:39.504731 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-02 00:48:39.504735 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-02 00:48:39.504750 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-02 00:48:39.504755 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-02 00:48:39.504761 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-02 00:48:39.504768 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-02 00:48:39.504774 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-02 00:48:39.504781 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-02 00:48:39.504788 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-02 00:48:39.504792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-02 00:48:39.504795 | orchestrator | 2026-04-02 00:48:39.504799 | orchestrator | 
PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-02 00:48:39.504803 | orchestrator | 2026-04-02 00:48:39.504807 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-02 00:48:39.504812 | orchestrator | Thursday 02 April 2026 00:46:28 +0000 (0:00:03.566) 0:02:24.227 ******** 2026-04-02 00:48:39.504818 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:48:39.504825 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:48:39.504831 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:48:39.504838 | orchestrator | 2026-04-02 00:48:39.504842 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-02 00:48:39.504845 | orchestrator | Thursday 02 April 2026 00:46:28 +0000 (0:00:00.260) 0:02:24.488 ******** 2026-04-02 00:48:39.504849 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:48:39.504853 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:48:39.504857 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:48:39.504861 | orchestrator | 2026-04-02 00:48:39.504864 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-02 00:48:39.504868 | orchestrator | Thursday 02 April 2026 00:46:29 +0000 (0:00:00.661) 0:02:25.150 ******** 2026-04-02 00:48:39.504872 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:48:39.504876 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:48:39.504879 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:48:39.504883 | orchestrator | 2026-04-02 00:48:39.504887 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-02 00:48:39.504891 | orchestrator | Thursday 02 April 2026 00:46:29 +0000 (0:00:00.405) 0:02:25.556 ******** 2026-04-02 00:48:39.504895 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:48:39.504899 | 
orchestrator | 2026-04-02 00:48:39.504902 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-02 00:48:39.504906 | orchestrator | Thursday 02 April 2026 00:46:30 +0000 (0:00:00.591) 0:02:26.147 ******** 2026-04-02 00:48:39.504910 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:48:39.504914 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:48:39.504920 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:48:39.504927 | orchestrator | 2026-04-02 00:48:39.504932 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-02 00:48:39.504936 | orchestrator | Thursday 02 April 2026 00:46:30 +0000 (0:00:00.377) 0:02:26.524 ******** 2026-04-02 00:48:39.504940 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:48:39.504943 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:48:39.504947 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:48:39.504951 | orchestrator | 2026-04-02 00:48:39.504955 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-02 00:48:39.504961 | orchestrator | Thursday 02 April 2026 00:46:30 +0000 (0:00:00.430) 0:02:26.955 ******** 2026-04-02 00:48:39.504965 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:48:39.504969 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:48:39.504975 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:48:39.504980 | orchestrator | 2026-04-02 00:48:39.504986 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-02 00:48:39.504992 | orchestrator | Thursday 02 April 2026 00:46:31 +0000 (0:00:00.305) 0:02:27.260 ******** 2026-04-02 00:48:39.504997 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:48:39.505003 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:48:39.505011 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:48:39.505019 | 
orchestrator | 2026-04-02 00:48:39.505029 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-02 00:48:39.505036 | orchestrator | Thursday 02 April 2026 00:46:31 +0000 (0:00:00.699) 0:02:27.959 ******** 2026-04-02 00:48:39.505042 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:48:39.505048 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:48:39.505053 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:48:39.505059 | orchestrator | 2026-04-02 00:48:39.505066 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-02 00:48:39.505072 | orchestrator | Thursday 02 April 2026 00:46:33 +0000 (0:00:01.303) 0:02:29.263 ******** 2026-04-02 00:48:39.505079 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:48:39.505086 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:48:39.505090 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:48:39.505094 | orchestrator | 2026-04-02 00:48:39.505098 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-02 00:48:39.505102 | orchestrator | Thursday 02 April 2026 00:46:34 +0000 (0:00:01.652) 0:02:30.916 ******** 2026-04-02 00:48:39.505106 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:48:39.505110 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:48:39.505114 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:48:39.505118 | orchestrator | 2026-04-02 00:48:39.505121 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-02 00:48:39.505125 | orchestrator | 2026-04-02 00:48:39.505129 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-02 00:48:39.505133 | orchestrator | Thursday 02 April 2026 00:46:44 +0000 (0:00:09.688) 0:02:40.604 ******** 2026-04-02 00:48:39.505136 | orchestrator | ok: [testbed-manager] 2026-04-02 
00:48:39.505140 | orchestrator | 2026-04-02 00:48:39.505144 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-02 00:48:39.505148 | orchestrator | Thursday 02 April 2026 00:46:45 +0000 (0:00:00.856) 0:02:41.461 ******** 2026-04-02 00:48:39.505152 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.505156 | orchestrator | 2026-04-02 00:48:39.505160 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-02 00:48:39.505164 | orchestrator | Thursday 02 April 2026 00:46:45 +0000 (0:00:00.339) 0:02:41.800 ******** 2026-04-02 00:48:39.505168 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-02 00:48:39.505171 | orchestrator | 2026-04-02 00:48:39.505175 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-02 00:48:39.505179 | orchestrator | Thursday 02 April 2026 00:46:46 +0000 (0:00:00.490) 0:02:42.291 ******** 2026-04-02 00:48:39.505250 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.505256 | orchestrator | 2026-04-02 00:48:39.505259 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-02 00:48:39.505263 | orchestrator | Thursday 02 April 2026 00:46:47 +0000 (0:00:00.786) 0:02:43.077 ******** 2026-04-02 00:48:39.505267 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.505271 | orchestrator | 2026-04-02 00:48:39.505275 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-02 00:48:39.505279 | orchestrator | Thursday 02 April 2026 00:46:47 +0000 (0:00:00.581) 0:02:43.658 ******** 2026-04-02 00:48:39.505283 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-02 00:48:39.505286 | orchestrator | 2026-04-02 00:48:39.505290 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-02 
00:48:39.505299 | orchestrator | Thursday 02 April 2026 00:46:49 +0000 (0:00:01.633) 0:02:45.292 ******** 2026-04-02 00:48:39.505303 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-02 00:48:39.505307 | orchestrator | 2026-04-02 00:48:39.505311 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-02 00:48:39.505315 | orchestrator | Thursday 02 April 2026 00:46:50 +0000 (0:00:00.833) 0:02:46.126 ******** 2026-04-02 00:48:39.505319 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.505322 | orchestrator | 2026-04-02 00:48:39.505326 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-02 00:48:39.505330 | orchestrator | Thursday 02 April 2026 00:46:50 +0000 (0:00:00.369) 0:02:46.496 ******** 2026-04-02 00:48:39.505334 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.505338 | orchestrator | 2026-04-02 00:48:39.505342 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-02 00:48:39.505346 | orchestrator | 2026-04-02 00:48:39.505349 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-02 00:48:39.505353 | orchestrator | Thursday 02 April 2026 00:46:50 +0000 (0:00:00.422) 0:02:46.919 ******** 2026-04-02 00:48:39.505357 | orchestrator | ok: [testbed-manager] 2026-04-02 00:48:39.505361 | orchestrator | 2026-04-02 00:48:39.505365 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-02 00:48:39.505369 | orchestrator | Thursday 02 April 2026 00:46:51 +0000 (0:00:00.134) 0:02:47.053 ******** 2026-04-02 00:48:39.505373 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-02 00:48:39.505377 | orchestrator | 2026-04-02 00:48:39.505380 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] 
****************** 2026-04-02 00:48:39.505384 | orchestrator | Thursday 02 April 2026 00:46:51 +0000 (0:00:00.228) 0:02:47.281 ******** 2026-04-02 00:48:39.505388 | orchestrator | ok: [testbed-manager] 2026-04-02 00:48:39.505392 | orchestrator | 2026-04-02 00:48:39.505396 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-02 00:48:39.505403 | orchestrator | Thursday 02 April 2026 00:46:52 +0000 (0:00:01.151) 0:02:48.432 ******** 2026-04-02 00:48:39.505407 | orchestrator | ok: [testbed-manager] 2026-04-02 00:48:39.505411 | orchestrator | 2026-04-02 00:48:39.505415 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-02 00:48:39.505418 | orchestrator | Thursday 02 April 2026 00:46:53 +0000 (0:00:01.298) 0:02:49.731 ******** 2026-04-02 00:48:39.505422 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.505426 | orchestrator | 2026-04-02 00:48:39.505430 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-02 00:48:39.505434 | orchestrator | Thursday 02 April 2026 00:46:54 +0000 (0:00:00.971) 0:02:50.702 ******** 2026-04-02 00:48:39.505438 | orchestrator | ok: [testbed-manager] 2026-04-02 00:48:39.505442 | orchestrator | 2026-04-02 00:48:39.505450 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-02 00:48:39.505453 | orchestrator | Thursday 02 April 2026 00:46:55 +0000 (0:00:00.565) 0:02:51.267 ******** 2026-04-02 00:48:39.505457 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.505461 | orchestrator | 2026-04-02 00:48:39.505465 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-02 00:48:39.505469 | orchestrator | Thursday 02 April 2026 00:47:02 +0000 (0:00:07.260) 0:02:58.527 ******** 2026-04-02 00:48:39.505473 | orchestrator | changed: [testbed-manager] 2026-04-02 
00:48:39.505477 | orchestrator | 2026-04-02 00:48:39.505481 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-02 00:48:39.505487 | orchestrator | Thursday 02 April 2026 00:47:17 +0000 (0:00:15.446) 0:03:13.974 ******** 2026-04-02 00:48:39.505494 | orchestrator | ok: [testbed-manager] 2026-04-02 00:48:39.505501 | orchestrator | 2026-04-02 00:48:39.505507 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-02 00:48:39.505511 | orchestrator | 2026-04-02 00:48:39.505515 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-02 00:48:39.505521 | orchestrator | Thursday 02 April 2026 00:47:18 +0000 (0:00:00.558) 0:03:14.532 ******** 2026-04-02 00:48:39.505525 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.505529 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.505533 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.505537 | orchestrator | 2026-04-02 00:48:39.505541 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-02 00:48:39.505547 | orchestrator | Thursday 02 April 2026 00:47:18 +0000 (0:00:00.436) 0:03:14.969 ******** 2026-04-02 00:48:39.505556 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.505564 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.505569 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.505576 | orchestrator | 2026-04-02 00:48:39.505581 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-02 00:48:39.505586 | orchestrator | Thursday 02 April 2026 00:47:19 +0000 (0:00:00.274) 0:03:15.244 ******** 2026-04-02 00:48:39.505592 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:48:39.505599 | orchestrator | 
2026-04-02 00:48:39.505605 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-02 00:48:39.505612 | orchestrator | Thursday 02 April 2026 00:47:19 +0000 (0:00:00.388) 0:03:15.633 ******** 2026-04-02 00:48:39.505618 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-02 00:48:39.505625 | orchestrator | 2026-04-02 00:48:39.505631 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-02 00:48:39.505635 | orchestrator | Thursday 02 April 2026 00:47:20 +0000 (0:00:00.996) 0:03:16.629 ******** 2026-04-02 00:48:39.505638 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:48:39.505642 | orchestrator | 2026-04-02 00:48:39.505646 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-02 00:48:39.505650 | orchestrator | Thursday 02 April 2026 00:47:21 +0000 (0:00:00.696) 0:03:17.326 ******** 2026-04-02 00:48:39.505654 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.505657 | orchestrator | 2026-04-02 00:48:39.505661 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-02 00:48:39.505665 | orchestrator | Thursday 02 April 2026 00:47:21 +0000 (0:00:00.197) 0:03:17.524 ******** 2026-04-02 00:48:39.505669 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:48:39.505673 | orchestrator | 2026-04-02 00:48:39.505676 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-02 00:48:39.505680 | orchestrator | Thursday 02 April 2026 00:47:22 +0000 (0:00:00.791) 0:03:18.315 ******** 2026-04-02 00:48:39.505684 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.505688 | orchestrator | 2026-04-02 00:48:39.505692 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-02 00:48:39.505696 | orchestrator | Thursday 02 April 
2026 00:47:22 +0000 (0:00:00.114) 0:03:18.430 ******** 2026-04-02 00:48:39.505699 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.505703 | orchestrator | 2026-04-02 00:48:39.505707 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-02 00:48:39.505710 | orchestrator | Thursday 02 April 2026 00:47:22 +0000 (0:00:00.124) 0:03:18.554 ******** 2026-04-02 00:48:39.505714 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.505718 | orchestrator | 2026-04-02 00:48:39.505722 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-02 00:48:39.505726 | orchestrator | Thursday 02 April 2026 00:47:22 +0000 (0:00:00.109) 0:03:18.664 ******** 2026-04-02 00:48:39.505730 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.505734 | orchestrator | 2026-04-02 00:48:39.505738 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-02 00:48:39.505742 | orchestrator | Thursday 02 April 2026 00:47:22 +0000 (0:00:00.107) 0:03:18.771 ******** 2026-04-02 00:48:39.505746 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-02 00:48:39.505754 | orchestrator | 2026-04-02 00:48:39.505758 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-02 00:48:39.505762 | orchestrator | Thursday 02 April 2026 00:47:27 +0000 (0:00:05.240) 0:03:24.012 ******** 2026-04-02 00:48:39.505766 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-02 00:48:39.505772 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-04-02 00:48:39.505776 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-02 00:48:39.505780 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-02 00:48:39.505784 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-02 00:48:39.505788 | orchestrator | 2026-04-02 00:48:39.505792 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-02 00:48:39.505795 | orchestrator | Thursday 02 April 2026 00:48:10 +0000 (0:00:42.462) 0:04:06.474 ******** 2026-04-02 00:48:39.505804 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:48:39.505811 | orchestrator | 2026-04-02 00:48:39 | INFO  | Task 7623ae9a-acfb-4a64-9631-61cbce034143 is in state SUCCESS 2026-04-02 00:48:39.505966 | orchestrator | 2026-04-02 00:48:39 | INFO  | Task 6b67c2a0-3162-4668-8aec-5cdeb57f48b1 is in state STARTED 2026-04-02 00:48:39.505980 | orchestrator | 2026-04-02 00:48:39.505987 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-02 00:48:39.505993 | orchestrator | Thursday 02 April 2026 00:48:11 +0000 (0:00:01.230) 0:04:07.705 ******** 2026-04-02 00:48:39.505999 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-02 00:48:39.506006 | orchestrator | 2026-04-02 00:48:39.506049 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-02 00:48:39.506059 | orchestrator | Thursday 02 April 2026 00:48:14 +0000 (0:00:02.606) 0:04:10.312 ******** 2026-04-02 00:48:39.506066 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-02 00:48:39.506072 | orchestrator | 2026-04-02 00:48:39.506078 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-02 00:48:39.506084 | orchestrator | Thursday 02 April 2026 00:48:15 +0000 (0:00:01.355) 0:04:11.668 ******** 2026-04-02 
00:48:39.506090 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.506096 | orchestrator | 2026-04-02 00:48:39.506102 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-02 00:48:39.506108 | orchestrator | Thursday 02 April 2026 00:48:15 +0000 (0:00:00.205) 0:04:11.873 ******** 2026-04-02 00:48:39.506114 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-02 00:48:39.506121 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-02 00:48:39.506127 | orchestrator | 2026-04-02 00:48:39.506133 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-02 00:48:39.506139 | orchestrator | Thursday 02 April 2026 00:48:17 +0000 (0:00:01.866) 0:04:13.739 ******** 2026-04-02 00:48:39.506145 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.506151 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.506157 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.506162 | orchestrator | 2026-04-02 00:48:39.506168 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-02 00:48:39.506173 | orchestrator | Thursday 02 April 2026 00:48:18 +0000 (0:00:00.314) 0:04:14.054 ******** 2026-04-02 00:48:39.506179 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.506198 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.506204 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.506211 | orchestrator | 2026-04-02 00:48:39.506217 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-02 00:48:39.506223 | orchestrator | 2026-04-02 00:48:39.506229 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-02 00:48:39.506243 | orchestrator | Thursday 02 April 2026 
00:48:18 +0000 (0:00:00.951) 0:04:15.005 ******** 2026-04-02 00:48:39.506249 | orchestrator | ok: [testbed-manager] 2026-04-02 00:48:39.506256 | orchestrator | 2026-04-02 00:48:39.506263 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-02 00:48:39.506269 | orchestrator | Thursday 02 April 2026 00:48:19 +0000 (0:00:00.130) 0:04:15.136 ******** 2026-04-02 00:48:39.506275 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-02 00:48:39.506281 | orchestrator | 2026-04-02 00:48:39.506288 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-02 00:48:39.506293 | orchestrator | Thursday 02 April 2026 00:48:19 +0000 (0:00:00.295) 0:04:15.432 ******** 2026-04-02 00:48:39.506297 | orchestrator | changed: [testbed-manager] 2026-04-02 00:48:39.506301 | orchestrator | 2026-04-02 00:48:39.506305 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-02 00:48:39.506309 | orchestrator | 2026-04-02 00:48:39.506313 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-02 00:48:39.506316 | orchestrator | Thursday 02 April 2026 00:48:24 +0000 (0:00:04.822) 0:04:20.254 ******** 2026-04-02 00:48:39.506320 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:48:39.506324 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:48:39.506328 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:48:39.506332 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:48:39.506336 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:48:39.506340 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:48:39.506343 | orchestrator | 2026-04-02 00:48:39.506347 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-02 00:48:39.506351 | orchestrator | Thursday 02 April 2026 00:48:24 +0000 (0:00:00.498) 
0:04:20.753 ******** 2026-04-02 00:48:39.506355 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-02 00:48:39.506359 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-02 00:48:39.506362 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-02 00:48:39.506366 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-02 00:48:39.506370 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-02 00:48:39.506377 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-02 00:48:39.506381 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-02 00:48:39.506385 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-02 00:48:39.506389 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-02 00:48:39.506393 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-02 00:48:39.506396 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-02 00:48:39.506400 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-02 00:48:39.506411 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-02 00:48:39.506415 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-02 00:48:39.506419 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-02 00:48:39.506423 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/network-plane=true) 2026-04-02 00:48:39.506426 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-02 00:48:39.506430 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-02 00:48:39.506434 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-02 00:48:39.506441 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-02 00:48:39.506445 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-02 00:48:39.506449 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-02 00:48:39.506453 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-02 00:48:39.506456 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-02 00:48:39.506460 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-02 00:48:39.506464 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-02 00:48:39.506467 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-02 00:48:39.506471 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-02 00:48:39.506475 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-02 00:48:39.506479 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-02 00:48:39.506482 | orchestrator | 2026-04-02 00:48:39.506486 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-02 00:48:39.506490 | orchestrator | 
Thursday 02 April 2026 00:48:36 +0000 (0:00:11.659) 0:04:32.413 ******** 2026-04-02 00:48:39.506494 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:48:39.506497 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:48:39.506501 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:48:39.506505 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.506509 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.506512 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.506516 | orchestrator | 2026-04-02 00:48:39.506520 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-02 00:48:39.506524 | orchestrator | Thursday 02 April 2026 00:48:36 +0000 (0:00:00.405) 0:04:32.818 ******** 2026-04-02 00:48:39.506527 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:48:39.506531 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:48:39.506535 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:48:39.506539 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:48:39.506542 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:48:39.506546 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:48:39.506550 | orchestrator | 2026-04-02 00:48:39.506553 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:48:39.506557 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:48:39.506562 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-02 00:48:39.506567 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-02 00:48:39.506570 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-02 00:48:39.506574 | orchestrator | testbed-node-3 : ok=16  changed=8  
unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-02 00:48:39.506578 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-02 00:48:39.506582 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-02 00:48:39.506590 | orchestrator | 2026-04-02 00:48:39.506595 | orchestrator | 2026-04-02 00:48:39.506599 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:48:39.506604 | orchestrator | Thursday 02 April 2026 00:48:37 +0000 (0:00:00.439) 0:04:33.258 ******** 2026-04-02 00:48:39.506608 | orchestrator | =============================================================================== 2026-04-02 00:48:39.506981 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.08s 2026-04-02 00:48:39.507004 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.46s 2026-04-02 00:48:39.507010 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 39.06s 2026-04-02 00:48:39.507025 | orchestrator | kubectl : Install required packages ------------------------------------ 15.45s 2026-04-02 00:48:39.507029 | orchestrator | Manage labels ---------------------------------------------------------- 11.66s 2026-04-02 00:48:39.507033 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.69s 2026-04-02 00:48:39.507038 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.26s 2026-04-02 00:48:39.507044 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.07s 2026-04-02 00:48:39.507054 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.24s 2026-04-02 00:48:39.507060 | orchestrator | k9s : Install k9s packages 
---------------------------------------------- 4.82s 2026-04-02 00:48:39.507066 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.57s 2026-04-02 00:48:39.507073 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.61s 2026-04-02 00:48:39.507079 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.42s 2026-04-02 00:48:39.507087 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.38s 2026-04-02 00:48:39.507093 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.12s 2026-04-02 00:48:39.507106 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.11s 2026-04-02 00:48:39.507112 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.11s 2026-04-02 00:48:39.507118 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.06s 2026-04-02 00:48:39.507124 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.05s 2026-04-02 00:48:39.507130 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.04s 2026-04-02 00:48:39.507137 | orchestrator | 2026-04-02 00:48:39 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:48:39.507143 | orchestrator | 2026-04-02 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:48:42.598644 | orchestrator | 2026-04-02 00:48:42 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:48:42.598708 | orchestrator | 2026-04-02 00:48:42 | INFO  | Task a9093460-9294-4743-b3e7-6eb163854c65 is in state STARTED 2026-04-02 00:48:42.598719 | orchestrator | 2026-04-02 00:48:42 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state 
STARTED 2026-04-02 00:48:42.598727 | orchestrator | 2026-04-02 00:48:42 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 2026-04-02 00:48:42.598734 | orchestrator | 2026-04-02 00:48:42 | INFO  | Task 6b67c2a0-3162-4668-8aec-5cdeb57f48b1 is in state STARTED 2026-04-02 00:48:42.598742 | orchestrator | 2026-04-02 00:48:42 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:48:42.598750 | orchestrator | 2026-04-02 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:48:45.567267 | orchestrator | 2026-04-02 00:48:45 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:48:45.567647 | orchestrator | 2026-04-02 00:48:45 | INFO  | Task a9093460-9294-4743-b3e7-6eb163854c65 is in state STARTED 2026-04-02 00:48:45.568265 | orchestrator | 2026-04-02 00:48:45 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:48:45.568824 | orchestrator | 2026-04-02 00:48:45 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 2026-04-02 00:48:45.569367 | orchestrator | 2026-04-02 00:48:45 | INFO  | Task 6b67c2a0-3162-4668-8aec-5cdeb57f48b1 is in state SUCCESS 2026-04-02 00:48:45.570689 | orchestrator | 2026-04-02 00:48:45 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:48:45.570777 | orchestrator | 2026-04-02 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:48:48.620753 | orchestrator | 2026-04-02 00:48:48 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:48:48.620817 | orchestrator | 2026-04-02 00:48:48 | INFO  | Task a9093460-9294-4743-b3e7-6eb163854c65 is in state SUCCESS 2026-04-02 00:48:48.620827 | orchestrator | 2026-04-02 00:48:48 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:48:48.620835 | orchestrator | 2026-04-02 00:48:48 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 
2026-04-02 00:48:48.621366 | orchestrator | 2026-04-02 00:48:48 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:48:48.621433 | orchestrator | 2026-04-02 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:48:51.644533 | orchestrator | 2026-04-02 00:48:51 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:48:51.645194 | orchestrator | 2026-04-02 00:48:51 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:48:51.647045 | orchestrator | 2026-04-02 00:48:51 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 2026-04-02 00:48:51.647747 | orchestrator | 2026-04-02 00:48:51 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:48:51.647908 | orchestrator | 2026-04-02 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:48:54.690651 | orchestrator | 2026-04-02 00:48:54 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:48:54.692398 | orchestrator | 2026-04-02 00:48:54 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:48:54.694145 | orchestrator | 2026-04-02 00:48:54 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 2026-04-02 00:48:54.696805 | orchestrator | 2026-04-02 00:48:54 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:48:54.696870 | orchestrator | 2026-04-02 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:48:57.742723 | orchestrator | 2026-04-02 00:48:57 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:48:57.744864 | orchestrator | 2026-04-02 00:48:57 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:48:57.747384 | orchestrator | 2026-04-02 00:48:57 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state STARTED 2026-04-02 00:48:57.750454 | 
orchestrator | 2026-04-02 00:48:57 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:48:57.750744 | orchestrator | 2026-04-02 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:49:00.803044 | orchestrator | 2026-04-02 00:49:00 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:49:00.805795 | orchestrator | 2026-04-02 00:49:00 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:49:00.806832 | orchestrator | 2026-04-02 00:49:00 | INFO  | Task 806b1cc9-eaca-45b3-9c2c-824127bc5d9d is in state SUCCESS 2026-04-02 00:49:00.808207 | orchestrator | 2026-04-02 00:49:00.808238 | orchestrator | 2026-04-02 00:49:00.808247 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-02 00:49:00.808257 | orchestrator | 2026-04-02 00:49:00.808266 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-02 00:49:00.808275 | orchestrator | Thursday 02 April 2026 00:48:40 +0000 (0:00:00.245) 0:00:00.245 ******** 2026-04-02 00:49:00.808283 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-02 00:49:00.808290 | orchestrator | 2026-04-02 00:49:00.808298 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-02 00:49:00.808305 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:01.033) 0:00:01.279 ******** 2026-04-02 00:49:00.808313 | orchestrator | changed: [testbed-manager] 2026-04-02 00:49:00.808320 | orchestrator | 2026-04-02 00:49:00.808328 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-02 00:49:00.808335 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:02.513) 0:00:03.792 ******** 2026-04-02 00:49:00.808342 | orchestrator | changed: [testbed-manager] 2026-04-02 00:49:00.808349 | orchestrator | 2026-04-02 
00:49:00.808356 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:49:00.808364 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:49:00.808372 | orchestrator | 2026-04-02 00:49:00.808379 | orchestrator | 2026-04-02 00:49:00.808386 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:49:00.808394 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:00.379) 0:00:04.171 ******** 2026-04-02 00:49:00.808401 | orchestrator | =============================================================================== 2026-04-02 00:49:00.808408 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.51s 2026-04-02 00:49:00.808415 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.03s 2026-04-02 00:49:00.808422 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.38s 2026-04-02 00:49:00.808430 | orchestrator | 2026-04-02 00:49:00.808437 | orchestrator | 2026-04-02 00:49:00.808444 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-02 00:49:00.808451 | orchestrator | 2026-04-02 00:49:00.808458 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-02 00:49:00.808465 | orchestrator | Thursday 02 April 2026 00:48:39 +0000 (0:00:00.198) 0:00:00.198 ******** 2026-04-02 00:49:00.808472 | orchestrator | ok: [testbed-manager] 2026-04-02 00:49:00.808480 | orchestrator | 2026-04-02 00:49:00.808487 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-02 00:49:00.808495 | orchestrator | Thursday 02 April 2026 00:48:40 +0000 (0:00:00.757) 0:00:00.956 ******** 2026-04-02 00:49:00.808502 | orchestrator | ok: [testbed-manager] 2026-04-02 
00:49:00.808509 | orchestrator | 2026-04-02 00:49:00.808516 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-02 00:49:00.808523 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:00.590) 0:00:01.546 ******** 2026-04-02 00:49:00.808530 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-02 00:49:00.808538 | orchestrator | 2026-04-02 00:49:00.808555 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-02 00:49:00.808563 | orchestrator | Thursday 02 April 2026 00:48:42 +0000 (0:00:00.899) 0:00:02.445 ******** 2026-04-02 00:49:00.808570 | orchestrator | changed: [testbed-manager] 2026-04-02 00:49:00.808577 | orchestrator | 2026-04-02 00:49:00.808584 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-02 00:49:00.808592 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:02.230) 0:00:04.676 ******** 2026-04-02 00:49:00.808615 | orchestrator | changed: [testbed-manager] 2026-04-02 00:49:00.808622 | orchestrator | 2026-04-02 00:49:00.808629 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-02 00:49:00.808637 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:00.468) 0:00:05.144 ******** 2026-04-02 00:49:00.808644 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-02 00:49:00.808651 | orchestrator | 2026-04-02 00:49:00.808658 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-02 00:49:00.808666 | orchestrator | Thursday 02 April 2026 00:48:46 +0000 (0:00:01.545) 0:00:06.690 ******** 2026-04-02 00:49:00.808673 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-02 00:49:00.808680 | orchestrator | 2026-04-02 00:49:00.808697 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-04-02 00:49:00.808705 | orchestrator | Thursday 02 April 2026 00:48:47 +0000 (0:00:00.772) 0:00:07.462 ******** 2026-04-02 00:49:00.808712 | orchestrator | ok: [testbed-manager] 2026-04-02 00:49:00.808719 | orchestrator | 2026-04-02 00:49:00.808727 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-02 00:49:00.808734 | orchestrator | Thursday 02 April 2026 00:48:47 +0000 (0:00:00.378) 0:00:07.841 ******** 2026-04-02 00:49:00.808741 | orchestrator | ok: [testbed-manager] 2026-04-02 00:49:00.808749 | orchestrator | 2026-04-02 00:49:00.808756 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:49:00.808763 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:49:00.808770 | orchestrator | 2026-04-02 00:49:00.808778 | orchestrator | 2026-04-02 00:49:00.808785 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:49:00.808792 | orchestrator | Thursday 02 April 2026 00:48:47 +0000 (0:00:00.261) 0:00:08.102 ******** 2026-04-02 00:49:00.808799 | orchestrator | =============================================================================== 2026-04-02 00:49:00.808807 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.23s 2026-04-02 00:49:00.808814 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.55s 2026-04-02 00:49:00.808821 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.90s 2026-04-02 00:49:00.808837 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s 2026-04-02 00:49:00.808845 | orchestrator | Get home directory of operator user ------------------------------------- 0.76s 2026-04-02 00:49:00.808852 | orchestrator | Create .kube directory 
-------------------------------------------------- 0.59s 2026-04-02 00:49:00.808934 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.47s 2026-04-02 00:49:00.808943 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 2026-04-02 00:49:00.808950 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2026-04-02 00:49:00.808957 | orchestrator | 2026-04-02 00:49:00.808965 | orchestrator | 2026-04-02 00:49:00.808972 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-02 00:49:00.808979 | orchestrator | 2026-04-02 00:49:00.809034 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-02 00:49:00.809044 | orchestrator | Thursday 02 April 2026 00:46:44 +0000 (0:00:00.094) 0:00:00.094 ******** 2026-04-02 00:49:00.809052 | orchestrator | ok: [localhost] => { 2026-04-02 00:49:00.809060 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-02 00:49:00.809067 | orchestrator | } 2026-04-02 00:49:00.809074 | orchestrator | 2026-04-02 00:49:00.809081 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-02 00:49:00.809089 | orchestrator | Thursday 02 April 2026 00:46:44 +0000 (0:00:00.041) 0:00:00.135 ******** 2026-04-02 00:49:00.809097 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-02 00:49:00.809111 | orchestrator | ...ignoring 2026-04-02 00:49:00.809118 | orchestrator | 2026-04-02 00:49:00.809126 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-02 00:49:00.809133 | orchestrator | Thursday 02 April 2026 00:46:47 +0000 (0:00:03.029) 0:00:03.164 ******** 2026-04-02 00:49:00.809140 | orchestrator | skipping: [localhost] 2026-04-02 00:49:00.809147 | orchestrator | 2026-04-02 00:49:00.809191 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-02 00:49:00.809200 | orchestrator | Thursday 02 April 2026 00:46:47 +0000 (0:00:00.048) 0:00:03.213 ******** 2026-04-02 00:49:00.809207 | orchestrator | ok: [localhost] 2026-04-02 00:49:00.809214 | orchestrator | 2026-04-02 00:49:00.809221 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:49:00.809228 | orchestrator | 2026-04-02 00:49:00.809235 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:49:00.809242 | orchestrator | Thursday 02 April 2026 00:46:47 +0000 (0:00:00.220) 0:00:03.433 ******** 2026-04-02 00:49:00.809250 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:00.809257 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:00.809264 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:00.809272 | orchestrator | 2026-04-02 00:49:00.809279 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:49:00.809286 | orchestrator | Thursday 02 April 2026 00:46:47 +0000 (0:00:00.349) 0:00:03.783 ******** 2026-04-02 00:49:00.809293 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-02 00:49:00.809301 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-04-02 00:49:00.809308 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-02 00:49:00.809315 | orchestrator | 2026-04-02 00:49:00.809322 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-02 00:49:00.809329 | orchestrator | 2026-04-02 00:49:00.809336 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-02 00:49:00.809344 | orchestrator | Thursday 02 April 2026 00:46:48 +0000 (0:00:00.505) 0:00:04.289 ******** 2026-04-02 00:49:00.809351 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:49:00.809358 | orchestrator | 2026-04-02 00:49:00.809366 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-02 00:49:00.809373 | orchestrator | Thursday 02 April 2026 00:46:49 +0000 (0:00:00.789) 0:00:05.078 ******** 2026-04-02 00:49:00.809380 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:00.809387 | orchestrator | 2026-04-02 00:49:00.809395 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-02 00:49:00.809415 | orchestrator | Thursday 02 April 2026 00:46:50 +0000 (0:00:01.260) 0:00:06.339 ******** 2026-04-02 00:49:00.809423 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:00.809430 | orchestrator | 2026-04-02 00:49:00.809437 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-02 00:49:00.809445 | orchestrator | Thursday 02 April 2026 00:46:50 +0000 (0:00:00.327) 0:00:06.667 ******** 2026-04-02 00:49:00.809452 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:00.809459 | orchestrator | 2026-04-02 00:49:00.809466 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-02 00:49:00.809473 | 
orchestrator | Thursday 02 April 2026 00:46:51 +0000 (0:00:00.431) 0:00:07.098 ******** 2026-04-02 00:49:00.809481 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:00.809488 | orchestrator | 2026-04-02 00:49:00.809495 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-02 00:49:00.809502 | orchestrator | Thursday 02 April 2026 00:46:51 +0000 (0:00:00.482) 0:00:07.581 ******** 2026-04-02 00:49:00.809509 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:00.809517 | orchestrator | 2026-04-02 00:49:00.809524 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-02 00:49:00.809536 | orchestrator | Thursday 02 April 2026 00:46:52 +0000 (0:00:00.444) 0:00:08.025 ******** 2026-04-02 00:49:00.809543 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:49:00.809550 | orchestrator | 2026-04-02 00:49:00.809558 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-02 00:49:00.809571 | orchestrator | Thursday 02 April 2026 00:46:54 +0000 (0:00:01.784) 0:00:09.809 ******** 2026-04-02 00:49:00.809579 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:00.809586 | orchestrator | 2026-04-02 00:49:00.809593 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-02 00:49:00.809601 | orchestrator | Thursday 02 April 2026 00:46:55 +0000 (0:00:01.268) 0:00:11.078 ******** 2026-04-02 00:49:00.809608 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:00.809615 | orchestrator | 2026-04-02 00:49:00.809623 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-02 00:49:00.809630 | orchestrator | Thursday 02 April 2026 00:46:56 +0000 (0:00:01.113) 0:00:12.191 ******** 2026-04-02 00:49:00.809637 | orchestrator | 
skipping: [testbed-node-0] 2026-04-02 00:49:00.809644 | orchestrator | 2026-04-02 00:49:00.809651 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-02 00:49:00.809659 | orchestrator | Thursday 02 April 2026 00:46:57 +0000 (0:00:00.630) 0:00:12.822 ******** 2026-04-02 00:49:00.809670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.809682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.809696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.809717 | orchestrator | 2026-04-02 00:49:00.809730 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-02 00:49:00.809750 | orchestrator | Thursday 02 April 2026 00:46:58 +0000 (0:00:01.145) 0:00:13.967 ******** 2026-04-02 00:49:00.809774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.809789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.809809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.809830 | orchestrator | 2026-04-02 00:49:00.809844 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-02 00:49:00.809856 | orchestrator | Thursday 02 April 2026 00:46:59 +0000 (0:00:01.751) 0:00:15.718 ******** 2026-04-02 00:49:00.809869 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-02 00:49:00.809882 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-02 00:49:00.809896 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-02 00:49:00.809909 | orchestrator | 2026-04-02 00:49:00.809921 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-04-02 00:49:00.809934 | orchestrator | Thursday 02 April 2026 00:47:02 +0000 (0:00:02.250) 0:00:17.969 ******** 2026-04-02 00:49:00.809947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-02 00:49:00.809960 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-02 00:49:00.809973 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-02 00:49:00.809986 | orchestrator | 2026-04-02 00:49:00.809998 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-02 00:49:00.810059 | orchestrator | Thursday 02 April 2026 00:47:06 +0000 (0:00:04.156) 0:00:22.125 ******** 2026-04-02 00:49:00.810077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-02 00:49:00.810090 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-02 00:49:00.810102 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-02 00:49:00.810119 | orchestrator | 2026-04-02 00:49:00.810135 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-02 00:49:00.810147 | orchestrator | Thursday 02 April 2026 00:47:08 +0000 (0:00:02.294) 0:00:24.420 ******** 2026-04-02 00:49:00.810177 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-02 00:49:00.810189 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-02 00:49:00.810201 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-02 00:49:00.810213 | orchestrator | 2026-04-02 00:49:00.810224 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-04-02 00:49:00.810236 | orchestrator | Thursday 02 April 2026 00:47:10 +0000 (0:00:02.068) 0:00:26.488 ******** 2026-04-02 00:49:00.810247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-02 00:49:00.810261 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-02 00:49:00.810274 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-02 00:49:00.810286 | orchestrator | 2026-04-02 00:49:00.810296 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-02 00:49:00.810304 | orchestrator | Thursday 02 April 2026 00:47:12 +0000 (0:00:02.243) 0:00:28.731 ******** 2026-04-02 00:49:00.810311 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-02 00:49:00.810318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-02 00:49:00.810326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-02 00:49:00.810333 | orchestrator | 2026-04-02 00:49:00.810340 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-02 00:49:00.810347 | orchestrator | Thursday 02 April 2026 00:47:15 +0000 (0:00:02.338) 0:00:31.069 ******** 2026-04-02 00:49:00.810363 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:00.810371 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:00.810378 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:00.810385 | orchestrator | 2026-04-02 00:49:00.810393 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-02 00:49:00.810400 | orchestrator | Thursday 02 April 2026 00:47:15 
+0000 (0:00:00.479) 0:00:31.549 ******** 2026-04-02 00:49:00.810414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.810431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.810440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:49:00.810448 | orchestrator | 2026-04-02 00:49:00.810455 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-02 00:49:00.810462 | orchestrator | Thursday 02 April 2026 00:47:17 +0000 (0:00:01.470) 0:00:33.019 ******** 2026-04-02 00:49:00.810470 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:00.810482 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:00.810489 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:00.810496 | orchestrator | 2026-04-02 00:49:00.810504 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-02 00:49:00.810511 | 
orchestrator | Thursday 02 April 2026 00:47:18 +0000 (0:00:01.393) 0:00:34.412 ******** 2026-04-02 00:49:00.810518 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:00.810526 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:00.810533 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:00.810540 | orchestrator | 2026-04-02 00:49:00.810547 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-02 00:49:00.810555 | orchestrator | Thursday 02 April 2026 00:47:26 +0000 (0:00:07.918) 0:00:42.330 ******** 2026-04-02 00:49:00.810562 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:00.810570 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:00.810579 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:00.810587 | orchestrator | 2026-04-02 00:49:00.810596 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-02 00:49:00.810605 | orchestrator | 2026-04-02 00:49:00.810614 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-02 00:49:00.810622 | orchestrator | Thursday 02 April 2026 00:47:27 +0000 (0:00:00.484) 0:00:42.815 ******** 2026-04-02 00:49:00.810631 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:00.810640 | orchestrator | 2026-04-02 00:49:00.810649 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-02 00:49:00.810658 | orchestrator | Thursday 02 April 2026 00:47:27 +0000 (0:00:00.544) 0:00:43.359 ******** 2026-04-02 00:49:00.810666 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:00.810675 | orchestrator | 2026-04-02 00:49:00.810684 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-02 00:49:00.810693 | orchestrator | Thursday 02 April 2026 00:47:27 +0000 (0:00:00.217) 0:00:43.576 ******** 2026-04-02 00:49:00.810701 | orchestrator 
| changed: [testbed-node-0] 2026-04-02 00:49:00.810710 | orchestrator | 2026-04-02 00:49:00.810718 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-02 00:49:00.810727 | orchestrator | Thursday 02 April 2026 00:47:29 +0000 (0:00:01.605) 0:00:45.182 ******** 2026-04-02 00:49:00.810736 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:00.810745 | orchestrator | 2026-04-02 00:49:00.810753 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-02 00:49:00.810762 | orchestrator | 2026-04-02 00:49:00.810775 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-02 00:49:00.810784 | orchestrator | Thursday 02 April 2026 00:48:23 +0000 (0:00:54.599) 0:01:39.781 ******** 2026-04-02 00:49:00.810793 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:00.810802 | orchestrator | 2026-04-02 00:49:00.810811 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-02 00:49:00.810819 | orchestrator | Thursday 02 April 2026 00:48:24 +0000 (0:00:00.583) 0:01:40.364 ******** 2026-04-02 00:49:00.810828 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:00.810837 | orchestrator | 2026-04-02 00:49:00.810845 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-02 00:49:00.810854 | orchestrator | Thursday 02 April 2026 00:48:24 +0000 (0:00:00.295) 0:01:40.660 ******** 2026-04-02 00:49:00.810863 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:00.810872 | orchestrator | 2026-04-02 00:49:00.810881 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-02 00:49:00.810904 | orchestrator | Thursday 02 April 2026 00:48:27 +0000 (0:00:02.683) 0:01:43.343 ******** 2026-04-02 00:49:00.810923 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:00.810943 
| orchestrator | 2026-04-02 00:49:00.810958 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-02 00:49:00.810973 | orchestrator | 2026-04-02 00:49:00.810987 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-02 00:49:00.811022 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:13.938) 0:01:57.282 ******** 2026-04-02 00:49:00.811039 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:00.811055 | orchestrator | 2026-04-02 00:49:00.811078 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-02 00:49:00.811099 | orchestrator | Thursday 02 April 2026 00:48:42 +0000 (0:00:00.526) 0:01:57.809 ******** 2026-04-02 00:49:00.811120 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:00.811135 | orchestrator | 2026-04-02 00:49:00.811151 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-02 00:49:00.811199 | orchestrator | Thursday 02 April 2026 00:48:42 +0000 (0:00:00.175) 0:01:57.984 ******** 2026-04-02 00:49:00.811215 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:00.811229 | orchestrator | 2026-04-02 00:49:00.811238 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-02 00:49:00.811252 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:01.853) 0:01:59.837 ******** 2026-04-02 00:49:00.811276 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:00.811292 | orchestrator | 2026-04-02 00:49:00.811308 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-02 00:49:00.811322 | orchestrator | 2026-04-02 00:49:00.811337 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-02 00:49:00.811353 | orchestrator | Thursday 02 April 2026 00:48:57 +0000 (0:00:13.328) 
0:02:13.166 ******** 2026-04-02 00:49:00.811368 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:49:00.811382 | orchestrator | 2026-04-02 00:49:00.811396 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-02 00:49:00.811411 | orchestrator | Thursday 02 April 2026 00:48:57 +0000 (0:00:00.608) 0:02:13.774 ******** 2026-04-02 00:49:00.811426 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:00.811442 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:00.811452 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:00.811461 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-02 00:49:00.811470 | orchestrator | enable_outward_rabbitmq_True 2026-04-02 00:49:00.811479 | orchestrator | 2026-04-02 00:49:00.811487 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-04-02 00:49:00.811496 | orchestrator | skipping: no hosts matched 2026-04-02 00:49:00.811505 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-02 00:49:00.811513 | orchestrator | outward_rabbitmq_restart 2026-04-02 00:49:00.811522 | orchestrator | 2026-04-02 00:49:00.811530 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-04-02 00:49:00.811539 | orchestrator | skipping: no hosts matched 2026-04-02 00:49:00.811548 | orchestrator | 2026-04-02 00:49:00.811556 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-04-02 00:49:00.811565 | orchestrator | skipping: no hosts matched 2026-04-02 00:49:00.811573 | orchestrator | 2026-04-02 00:49:00.811582 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:49:00.811591 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 
00:49:00.811601 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-02 00:49:00.811610 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:49:00.811619 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:49:00.811627 | orchestrator | 2026-04-02 00:49:00.811636 | orchestrator | 2026-04-02 00:49:00.811644 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:49:00.811653 | orchestrator | Thursday 02 April 2026 00:48:59 +0000 (0:00:01.790) 0:02:15.565 ******** 2026-04-02 00:49:00.811671 | orchestrator | =============================================================================== 2026-04-02 00:49:00.811679 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.87s 2026-04-02 00:49:00.811688 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.92s 2026-04-02 00:49:00.811697 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.14s 2026-04-02 00:49:00.811705 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.16s 2026-04-02 00:49:00.811720 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.03s 2026-04-02 00:49:00.811729 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.34s 2026-04-02 00:49:00.811737 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.29s 2026-04-02 00:49:00.811746 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.25s 2026-04-02 00:49:00.811755 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.24s 2026-04-02 00:49:00.811764 | 
orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.07s 2026-04-02 00:49:00.811772 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 1.79s 2026-04-02 00:49:00.811781 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.78s 2026-04-02 00:49:00.811790 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.75s 2026-04-02 00:49:00.811798 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.65s 2026-04-02 00:49:00.811807 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.47s 2026-04-02 00:49:00.811815 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.39s 2026-04-02 00:49:00.811824 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.27s 2026-04-02 00:49:00.811841 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.26s 2026-04-02 00:49:00.811850 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.15s 2026-04-02 00:49:00.811859 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.11s 2026-04-02 00:49:00.811868 | orchestrator | 2026-04-02 00:49:00 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 2026-04-02 00:49:00.811877 | orchestrator | 2026-04-02 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:49:03.860718 | orchestrator | 2026-04-02 00:49:03 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:49:03.861317 | orchestrator | 2026-04-02 00:49:03 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:49:03.862051 | orchestrator | 2026-04-02 00:49:03 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state STARTED 
2026-04-02 00:49:52 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED 2026-04-02 00:49:52.533868 | orchestrator | 2026-04-02 00:49:52 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:49:52.536816 | orchestrator | 2026-04-02 00:49:52 | INFO  | Task 51f87394-dc57-4f84-9b67-61d3b6ced042 is in state SUCCESS 2026-04-02 00:49:52.538336 | orchestrator | 2026-04-02 00:49:52.538392 | orchestrator | 2026-04-02 00:49:52.538399 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:49:52.538405 | orchestrator | 2026-04-02 00:49:52.538408 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:49:52.538413 | orchestrator | Thursday 02 April 2026 00:47:32 +0000 (0:00:00.171) 0:00:00.172 ******** 2026-04-02 00:49:52.538417 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:49:52.538422 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:49:52.538426 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:49:52.538430 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.538433 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.538437 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.538441 | orchestrator | 2026-04-02 00:49:52.538445 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:49:52.538449 | orchestrator | Thursday 02 April 2026 00:47:33 +0000 (0:00:00.607) 0:00:00.779 ******** 2026-04-02 00:49:52.538453 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-02 00:49:52.538458 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-02 00:49:52.538461 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-02 00:49:52.538465 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-02 00:49:52.538469 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 
2026-04-02 00:49:52.538472 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-02 00:49:52.538476 | orchestrator | 2026-04-02 00:49:52.538480 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-02 00:49:52.538484 | orchestrator | 2026-04-02 00:49:52.538488 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-02 00:49:52.538492 | orchestrator | Thursday 02 April 2026 00:47:34 +0000 (0:00:00.958) 0:00:01.738 ******** 2026-04-02 00:49:52.538504 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:49:52.538510 | orchestrator | 2026-04-02 00:49:52.538513 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-02 00:49:52.538517 | orchestrator | Thursday 02 April 2026 00:47:35 +0000 (0:00:01.135) 0:00:02.873 ******** 2026-04-02 00:49:52.538523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538538 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538577 | orchestrator | 2026-04-02 00:49:52.538594 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-02 00:49:52.538598 | 
orchestrator | Thursday 02 April 2026 00:47:37 +0000 (0:00:02.009) 0:00:04.883 ******** 2026-04-02 00:49:52.538602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538618 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538626 | orchestrator | 2026-04-02 00:49:52.538630 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-02 00:49:52.538649 | orchestrator | Thursday 02 April 2026 00:47:39 +0000 (0:00:01.608) 0:00:06.491 ******** 2026-04-02 00:49:52.538656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538683 | orchestrator | 2026-04-02 00:49:52.538686 | 
orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-02 00:49:52.538690 | orchestrator | Thursday 02 April 2026 00:47:40 +0000 (0:00:01.140) 0:00:07.632 ******** 2026-04-02 00:49:52.538738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538829 | orchestrator | 2026-04-02 00:49:52.538838 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-02 00:49:52.538889 | orchestrator | Thursday 02 April 2026 00:47:41 +0000 (0:00:01.581) 0:00:09.214 ******** 2026-04-02 00:49:52.538896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.538990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.539000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.539005 | orchestrator | 2026-04-02 00:49:52.539009 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-02 00:49:52.539014 | orchestrator | Thursday 02 April 2026 00:47:43 +0000 (0:00:01.410) 0:00:10.624 ******** 2026-04-02 00:49:52.539019 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:49:52.539023 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:49:52.539028 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:49:52.539032 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.539036 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.539041 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.539045 | orchestrator | 2026-04-02 00:49:52.539052 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-02 00:49:52.539057 | orchestrator | Thursday 02 April 2026 00:47:45 +0000 (0:00:02.195) 0:00:12.820 ******** 2026-04-02 00:49:52.539062 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-02 00:49:52.539067 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-02 00:49:52.539071 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-02 00:49:52.539076 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-02 00:49:52.539080 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-02 00:49:52.539083 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-02 00:49:52.539087 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-04-02 00:49:52.539091 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-02 00:49:52.539099 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-02 00:49:52.539103 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-02 00:49:52.539107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-02 00:49:52.539111 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-02 00:49:52.539115 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-02 00:49:52.539120 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-02 00:49:52.539124 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-02 00:49:52.539128 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-02 00:49:52.539132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-02 00:49:52.539135 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-02 00:49:52.539143 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-02 00:49:52.539147 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-02 00:49:52.539151 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-02 00:49:52.539155 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-02 00:49:52.539159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-02 00:49:52.539162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-02 00:49:52.539166 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-02 00:49:52.539170 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-02 00:49:52.539174 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-02 00:49:52.539177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-02 00:49:52.539181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-02 00:49:52.539185 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-02 00:49:52.539189 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-02 00:49:52.539193 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-02 00:49:52.539196 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-02 00:49:52.539200 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-02 00:49:52.539220 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2026-04-02 00:49:52.539224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-02 00:49:52.539227 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-02 00:49:52.539234 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-02 00:49:52.539409 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-02 00:49:52.539416 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-02 00:49:52.539420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-02 00:49:52.539424 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-02 00:49:52.539428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-02 00:49:52.539432 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-02 00:49:52.539439 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-02 00:49:52.539444 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-02 00:49:52.539447 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-02 00:49:52.539456 | 
orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-02 00:49:52.539460 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-02 00:49:52.539464 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-02 00:49:52.539468 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-02 00:49:52.539472 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-02 00:49:52.539476 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-02 00:49:52.539479 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-02 00:49:52.539483 | orchestrator | 2026-04-02 00:49:52.539487 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-02 00:49:52.539491 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:20.633) 0:00:33.453 ******** 2026-04-02 00:49:52.539495 | orchestrator | 2026-04-02 00:49:52.539499 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-02 00:49:52.539502 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.132) 0:00:33.585 ******** 2026-04-02 00:49:52.539506 | orchestrator | 2026-04-02 00:49:52.539510 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-02 00:49:52.539514 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.132) 0:00:33.718 ******** 2026-04-02 00:49:52.539517 | 
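For reference, the `ovn-remote` value applied by the "Configure OVN in OVSDB" task above is simply the comma-joined list of southbound DB endpoints on the three control nodes. A minimal sketch of how such a settings map is derived (the helper name and inputs are illustrative, not part of the kolla-ansible role):

```python
# Sketch: derive the external_ids applied by "Configure OVN in OVSDB" above.
# Helper and variable names are illustrative, not taken from the role.

def ovn_remote(controller_ips, port=6642):
    """Join the southbound DB endpoints into the ovn-remote connection string."""
    return ",".join(f"tcp:{ip}:{port}" for ip in controller_ips)

# Values matching this run: geneve tunnels, SB DB on the three control nodes.
settings = {
    "ovn-encap-type": "geneve",
    "ovn-remote": ovn_remote(["192.168.16.10", "192.168.16.11", "192.168.16.12"]),
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
}
```

Per-host values such as `ovn-encap-ip` and `ovn-chassis-mac-mappings` are filled in from each node's inventory facts, which is why every `changed:` item above carries a different IP or MAC.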
orchestrator | 2026-04-02 00:49:52.539521 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-02 00:49:52.539525 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.082) 0:00:33.800 ******** 2026-04-02 00:49:52.539529 | orchestrator | 2026-04-02 00:49:52.539532 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-02 00:49:52.539536 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.060) 0:00:33.861 ******** 2026-04-02 00:49:52.539540 | orchestrator | 2026-04-02 00:49:52.539544 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-02 00:49:52.539547 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.068) 0:00:33.930 ******** 2026-04-02 00:49:52.539551 | orchestrator | 2026-04-02 00:49:52.539555 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-02 00:49:52.539559 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.061) 0:00:33.991 ******** 2026-04-02 00:49:52.539563 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:49:52.539567 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:49:52.539570 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:49:52.539574 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.539578 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.539581 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.539585 | orchestrator | 2026-04-02 00:49:52.539589 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-02 00:49:52.539593 | orchestrator | Thursday 02 April 2026 00:48:08 +0000 (0:00:01.942) 0:00:35.933 ******** 2026-04-02 00:49:52.539597 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.539600 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:49:52.539604 | orchestrator | changed: 
[testbed-node-4] 2026-04-02 00:49:52.539608 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:49:52.539612 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.539615 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.539619 | orchestrator | 2026-04-02 00:49:52.539623 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-02 00:49:52.539627 | orchestrator | 2026-04-02 00:49:52.539631 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-02 00:49:52.539641 | orchestrator | Thursday 02 April 2026 00:48:36 +0000 (0:00:27.612) 0:01:03.546 ******** 2026-04-02 00:49:52.539645 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:49:52.539649 | orchestrator | 2026-04-02 00:49:52.539653 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-02 00:49:52.539656 | orchestrator | Thursday 02 April 2026 00:48:36 +0000 (0:00:00.427) 0:01:03.973 ******** 2026-04-02 00:49:52.539660 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:49:52.539664 | orchestrator | 2026-04-02 00:49:52.539668 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-02 00:49:52.539672 | orchestrator | Thursday 02 April 2026 00:48:37 +0000 (0:00:00.606) 0:01:04.580 ******** 2026-04-02 00:49:52.539675 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.539679 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.539683 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.539699 | orchestrator | 2026-04-02 00:49:52.539703 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-02 00:49:52.539706 | orchestrator | Thursday 02 April 2026 00:48:37 +0000 
(0:00:00.686) 0:01:05.267 ******** 2026-04-02 00:49:52.539710 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.539714 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.539718 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.539724 | orchestrator | 2026-04-02 00:49:52.539728 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-02 00:49:52.539732 | orchestrator | Thursday 02 April 2026 00:48:38 +0000 (0:00:00.332) 0:01:05.600 ******** 2026-04-02 00:49:52.539736 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.539739 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.539743 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.539747 | orchestrator | 2026-04-02 00:49:52.539751 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-02 00:49:52.539754 | orchestrator | Thursday 02 April 2026 00:48:38 +0000 (0:00:00.436) 0:01:06.037 ******** 2026-04-02 00:49:52.539758 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.539762 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.539765 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.539769 | orchestrator | 2026-04-02 00:49:52.539773 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-02 00:49:52.539777 | orchestrator | Thursday 02 April 2026 00:48:38 +0000 (0:00:00.291) 0:01:06.328 ******** 2026-04-02 00:49:52.539780 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.539784 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.539788 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.539792 | orchestrator | 2026-04-02 00:49:52.539796 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-02 00:49:52.539799 | orchestrator | Thursday 02 April 2026 00:48:39 +0000 (0:00:00.287) 0:01:06.616 ******** 2026-04-02 00:49:52.539803 | 
orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539807 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539811 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539814 | orchestrator | 2026-04-02 00:49:52.539818 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-02 00:49:52.539822 | orchestrator | Thursday 02 April 2026 00:48:39 +0000 (0:00:00.243) 0:01:06.860 ******** 2026-04-02 00:49:52.539826 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539830 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539833 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539837 | orchestrator | 2026-04-02 00:49:52.539841 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-02 00:49:52.539845 | orchestrator | Thursday 02 April 2026 00:48:39 +0000 (0:00:00.271) 0:01:07.132 ******** 2026-04-02 00:49:52.539848 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539852 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539859 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539863 | orchestrator | 2026-04-02 00:49:52.539867 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-02 00:49:52.539871 | orchestrator | Thursday 02 April 2026 00:48:40 +0000 (0:00:00.396) 0:01:07.528 ******** 2026-04-02 00:49:52.539874 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539878 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539882 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539886 | orchestrator | 2026-04-02 00:49:52.539889 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-02 00:49:52.539893 | orchestrator | Thursday 02 April 2026 00:48:40 +0000 (0:00:00.248) 0:01:07.776 ******** 2026-04-02 00:49:52.539897 | 
orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539901 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539904 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539908 | orchestrator | 2026-04-02 00:49:52.539912 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-02 00:49:52.539916 | orchestrator | Thursday 02 April 2026 00:48:40 +0000 (0:00:00.214) 0:01:07.991 ******** 2026-04-02 00:49:52.539920 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539923 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539927 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539931 | orchestrator | 2026-04-02 00:49:52.539935 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-02 00:49:52.539938 | orchestrator | Thursday 02 April 2026 00:48:40 +0000 (0:00:00.215) 0:01:08.206 ******** 2026-04-02 00:49:52.539942 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539946 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539950 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539953 | orchestrator | 2026-04-02 00:49:52.539957 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-02 00:49:52.539961 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:00.360) 0:01:08.566 ******** 2026-04-02 00:49:52.539965 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539969 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539972 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.539976 | orchestrator | 2026-04-02 00:49:52.539980 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-02 00:49:52.539986 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:00.251) 0:01:08.818 ******** 2026-04-02 00:49:52.539990 | 
orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.539994 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.539998 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540002 | orchestrator | 2026-04-02 00:49:52.540005 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-02 00:49:52.540009 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:00.300) 0:01:09.118 ******** 2026-04-02 00:49:52.540013 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540017 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540021 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540024 | orchestrator | 2026-04-02 00:49:52.540028 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-02 00:49:52.540032 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:00.230) 0:01:09.349 ******** 2026-04-02 00:49:52.540037 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540041 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540046 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540050 | orchestrator | 2026-04-02 00:49:52.540055 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-02 00:49:52.540060 | orchestrator | Thursday 02 April 2026 00:48:42 +0000 (0:00:00.655) 0:01:10.005 ******** 2026-04-02 00:49:52.540064 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540069 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540079 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540083 | orchestrator | 2026-04-02 00:49:52.540087 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-02 00:49:52.540092 | orchestrator | Thursday 02 April 2026 00:48:42 +0000 (0:00:00.311) 0:01:10.316 ******** 2026-04-02 00:49:52.540096 | 
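The `lookup_cluster.yml` sequence above checks each control node for existing OVN NB/SB container volumes; because none were found in this fresh deployment, every liveness and leader/follower check is skipped and the role falls through to `bootstrap-initial.yml`. A rough sketch of that branch decision (names are illustrative, not from the role):

```python
# Sketch of the cluster-lookup decision visible in the log: with no existing
# OVN DB volume on any host, the role bootstraps a new cluster; otherwise it
# would probe the existing members for a leader and join the remaining hosts.
# Function and task-file names here are illustrative.

def choose_bootstrap_path(volume_present):
    """volume_present: dict mapping host -> bool (existing OVN DB volume)."""
    hosts_with_volume = [h for h, present in volume_present.items() if present]
    if not hosts_with_volume:
        return "bootstrap-initial"  # new cluster, as in this run
    return "join-existing"          # check leader, add hosts without volumes

path = choose_bootstrap_path({
    "testbed-node-0": False,
    "testbed-node-1": False,
    "testbed-node-2": False,
})
```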
orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:49:52.540100 | orchestrator | 2026-04-02 00:49:52.540105 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-02 00:49:52.540109 | orchestrator | Thursday 02 April 2026 00:48:43 +0000 (0:00:00.604) 0:01:10.920 ******** 2026-04-02 00:49:52.540114 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.540118 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.540123 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.540127 | orchestrator | 2026-04-02 00:49:52.540131 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-02 00:49:52.540136 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:00.642) 0:01:11.563 ******** 2026-04-02 00:49:52.540140 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.540145 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.540149 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.540153 | orchestrator | 2026-04-02 00:49:52.540158 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-02 00:49:52.540162 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:00.429) 0:01:11.992 ******** 2026-04-02 00:49:52.540167 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540171 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540175 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540180 | orchestrator | 2026-04-02 00:49:52.540184 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-02 00:49:52.540189 | orchestrator | Thursday 02 April 2026 00:48:44 +0000 (0:00:00.273) 0:01:12.266 ******** 2026-04-02 00:49:52.540193 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540197 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 00:49:52.540243 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540248 | orchestrator | 2026-04-02 00:49:52.540253 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-02 00:49:52.540258 | orchestrator | Thursday 02 April 2026 00:48:45 +0000 (0:00:00.281) 0:01:12.547 ******** 2026-04-02 00:49:52.540262 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540266 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540271 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540275 | orchestrator | 2026-04-02 00:49:52.540280 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-02 00:49:52.540284 | orchestrator | Thursday 02 April 2026 00:48:45 +0000 (0:00:00.349) 0:01:12.897 ******** 2026-04-02 00:49:52.540289 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540293 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540297 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540302 | orchestrator | 2026-04-02 00:49:52.540306 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-02 00:49:52.540310 | orchestrator | Thursday 02 April 2026 00:48:45 +0000 (0:00:00.265) 0:01:13.162 ******** 2026-04-02 00:49:52.540315 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540319 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540323 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540328 | orchestrator | 2026-04-02 00:49:52.540332 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-02 00:49:52.540336 | orchestrator | Thursday 02 April 2026 00:48:45 +0000 (0:00:00.231) 0:01:13.394 ******** 2026-04-02 00:49:52.540341 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540345 | orchestrator 
| skipping: [testbed-node-1] 2026-04-02 00:49:52.540350 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540354 | orchestrator | 2026-04-02 00:49:52.540362 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-02 00:49:52.540366 | orchestrator | Thursday 02 April 2026 00:48:46 +0000 (0:00:00.256) 0:01:13.651 ******** 2026-04-02 00:49:52.540372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-02 00:49:52.540404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540427 | orchestrator | 2026-04-02 00:49:52.540431 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-02 00:49:52.540441 | orchestrator | Thursday 02 April 2026 00:48:47 +0000 (0:00:01.246) 0:01:14.898 ******** 2026-04-02 00:49:52.540445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540483 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540487 | orchestrator | 2026-04-02 00:49:52.540491 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-02 00:49:52.540499 | orchestrator | Thursday 02 April 2026 00:48:51 +0000 (0:00:03.830) 0:01:18.728 ******** 2026-04-02 00:49:52.540503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540543 | orchestrator | 2026-04-02 00:49:52.540547 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-02 00:49:52.540551 | orchestrator | Thursday 02 April 2026 00:48:53 +0000 (0:00:02.398) 0:01:21.127 ******** 2026-04-02 00:49:52.540558 | orchestrator | 2026-04-02 00:49:52.540562 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-02 00:49:52.540566 | orchestrator | Thursday 02 April 2026 00:48:53 +0000 (0:00:00.071) 0:01:21.199 ******** 2026-04-02 00:49:52.540569 | orchestrator | 2026-04-02 00:49:52.540573 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-02 00:49:52.540578 | orchestrator | Thursday 02 April 2026 00:48:53 +0000 (0:00:00.064) 0:01:21.263 ******** 2026-04-02 00:49:52.540581 | orchestrator | 2026-04-02 00:49:52.540585 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-02 00:49:52.540589 | orchestrator | Thursday 02 April 2026 00:48:53 +0000 (0:00:00.079) 0:01:21.343 ******** 2026-04-02 00:49:52.540593 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.540597 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.540600 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.540604 | orchestrator | 2026-04-02 00:49:52.540608 | orchestrator | RUNNING HANDLER [ovn-db 
: Restart ovn-sb-db container] ************************* 2026-04-02 00:49:52.540612 | orchestrator | Thursday 02 April 2026 00:49:01 +0000 (0:00:07.821) 0:01:29.164 ******** 2026-04-02 00:49:52.540615 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.540619 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.540623 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.540627 | orchestrator | 2026-04-02 00:49:52.540630 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-02 00:49:52.540634 | orchestrator | Thursday 02 April 2026 00:49:09 +0000 (0:00:08.080) 0:01:37.245 ******** 2026-04-02 00:49:52.540638 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.540642 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.540646 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.540650 | orchestrator | 2026-04-02 00:49:52.540653 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-02 00:49:52.540657 | orchestrator | Thursday 02 April 2026 00:49:12 +0000 (0:00:02.556) 0:01:39.801 ******** 2026-04-02 00:49:52.540661 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.540665 | orchestrator | 2026-04-02 00:49:52.540669 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-02 00:49:52.540672 | orchestrator | Thursday 02 April 2026 00:49:12 +0000 (0:00:00.108) 0:01:39.910 ******** 2026-04-02 00:49:52.540676 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.540680 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.540684 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.540688 | orchestrator | 2026-04-02 00:49:52.540691 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-02 00:49:52.540698 | orchestrator | Thursday 02 April 2026 00:49:13 +0000 (0:00:00.749) 0:01:40.659 
******** 2026-04-02 00:49:52.540702 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540705 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540709 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.540713 | orchestrator | 2026-04-02 00:49:52.540717 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-02 00:49:52.540721 | orchestrator | Thursday 02 April 2026 00:49:13 +0000 (0:00:00.608) 0:01:41.268 ******** 2026-04-02 00:49:52.540725 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.540729 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.540732 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.540736 | orchestrator | 2026-04-02 00:49:52.540740 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-02 00:49:52.540744 | orchestrator | Thursday 02 April 2026 00:49:14 +0000 (0:00:00.908) 0:01:42.176 ******** 2026-04-02 00:49:52.540748 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.540751 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.540755 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.540759 | orchestrator | 2026-04-02 00:49:52.540763 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-02 00:49:52.540770 | orchestrator | Thursday 02 April 2026 00:49:15 +0000 (0:00:00.578) 0:01:42.754 ******** 2026-04-02 00:49:52.540774 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.540778 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.540784 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.540788 | orchestrator | 2026-04-02 00:49:52.540792 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-02 00:49:52.540796 | orchestrator | Thursday 02 April 2026 00:49:16 +0000 (0:00:00.745) 0:01:43.500 ******** 2026-04-02 00:49:52.540800 | 
orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.540803 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.540807 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.540811 | orchestrator | 2026-04-02 00:49:52.540815 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-02 00:49:52.540818 | orchestrator | Thursday 02 April 2026 00:49:16 +0000 (0:00:00.750) 0:01:44.251 ******** 2026-04-02 00:49:52.540822 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.540826 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.540830 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.540833 | orchestrator | 2026-04-02 00:49:52.540837 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-02 00:49:52.540841 | orchestrator | Thursday 02 April 2026 00:49:17 +0000 (0:00:00.440) 0:01:44.691 ******** 2026-04-02 00:49:52.540845 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540849 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540857 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540861 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540865 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540872 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540885 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540889 | orchestrator | 2026-04-02 00:49:52.540893 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-02 00:49:52.540897 | orchestrator | Thursday 02 April 2026 00:49:18 +0000 (0:00:01.587) 0:01:46.279 ******** 2026-04-02 00:49:52.540901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540905 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540909 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540917 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 
00:49:52.540935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540943 | orchestrator | 2026-04-02 00:49:52.540947 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-02 00:49:52.540951 | orchestrator | Thursday 02 April 2026 00:49:22 +0000 (0:00:03.679) 0:01:49.959 ******** 2026-04-02 00:49:52.540957 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540961 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540965 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540969 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 00:49:52.540999 | orchestrator | 2026-04-02 00:49:52.541003 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-02 00:49:52.541007 | orchestrator | Thursday 02 April 2026 00:49:25 +0000 (0:00:02.642) 0:01:52.601 ******** 2026-04-02 00:49:52.541011 | orchestrator | 2026-04-02 00:49:52.541015 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-02 00:49:52.541018 | orchestrator | Thursday 02 April 2026 00:49:25 +0000 (0:00:00.061) 0:01:52.663 ******** 2026-04-02 00:49:52.541022 | orchestrator | 2026-04-02 00:49:52.541026 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-02 00:49:52.541030 | orchestrator | Thursday 02 April 2026 00:49:25 +0000 (0:00:00.059) 0:01:52.722 ******** 2026-04-02 00:49:52.541034 | orchestrator | 2026-04-02 00:49:52.541037 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 
2026-04-02 00:49:52.541041 | orchestrator | Thursday 02 April 2026 00:49:25 +0000 (0:00:00.181) 0:01:52.903 ******** 2026-04-02 00:49:52.541045 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.541049 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.541053 | orchestrator | 2026-04-02 00:49:52.541059 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-02 00:49:52.541063 | orchestrator | Thursday 02 April 2026 00:49:31 +0000 (0:00:06.139) 0:01:59.042 ******** 2026-04-02 00:49:52.541067 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.541070 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.541074 | orchestrator | 2026-04-02 00:49:52.541078 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-02 00:49:52.541082 | orchestrator | Thursday 02 April 2026 00:49:38 +0000 (0:00:06.631) 0:02:05.674 ******** 2026-04-02 00:49:52.541086 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:49:52.541089 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:49:52.541093 | orchestrator | 2026-04-02 00:49:52.541097 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-02 00:49:52.541101 | orchestrator | Thursday 02 April 2026 00:49:44 +0000 (0:00:06.292) 0:02:11.967 ******** 2026-04-02 00:49:52.541104 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:49:52.541108 | orchestrator | 2026-04-02 00:49:52.541112 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-02 00:49:52.541116 | orchestrator | Thursday 02 April 2026 00:49:44 +0000 (0:00:00.107) 0:02:12.074 ******** 2026-04-02 00:49:52.541119 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.541123 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.541127 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.541131 | orchestrator | 
2026-04-02 00:49:52.541135 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-02 00:49:52.541139 | orchestrator | Thursday 02 April 2026 00:49:45 +0000 (0:00:00.708) 0:02:12.783 ******** 2026-04-02 00:49:52.541142 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.541146 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.541150 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.541154 | orchestrator | 2026-04-02 00:49:52.541157 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-02 00:49:52.541165 | orchestrator | Thursday 02 April 2026 00:49:45 +0000 (0:00:00.554) 0:02:13.337 ******** 2026-04-02 00:49:52.541169 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.541173 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.541177 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.541180 | orchestrator | 2026-04-02 00:49:52.541184 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-02 00:49:52.541188 | orchestrator | Thursday 02 April 2026 00:49:46 +0000 (0:00:00.683) 0:02:14.021 ******** 2026-04-02 00:49:52.541192 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:49:52.541195 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:49:52.541199 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:49:52.541250 | orchestrator | 2026-04-02 00:49:52.541254 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-02 00:49:52.541257 | orchestrator | Thursday 02 April 2026 00:49:47 +0000 (0:00:00.614) 0:02:14.636 ******** 2026-04-02 00:49:52.541261 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.541265 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.541269 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.541273 | orchestrator | 2026-04-02 00:49:52.541276 | orchestrator 
| TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-02 00:49:52.541280 | orchestrator | Thursday 02 April 2026 00:49:47 +0000 (0:00:00.687) 0:02:15.324 ******** 2026-04-02 00:49:52.541284 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:49:52.541288 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:49:52.541291 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:49:52.541295 | orchestrator | 2026-04-02 00:49:52.541299 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:49:52.541303 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-02 00:49:52.541307 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-02 00:49:52.541311 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-02 00:49:52.541315 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:49:52.541319 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:49:52.541325 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:49:52.541329 | orchestrator | 2026-04-02 00:49:52.541333 | orchestrator | 2026-04-02 00:49:52.541337 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:49:52.541341 | orchestrator | Thursday 02 April 2026 00:49:49 +0000 (0:00:01.441) 0:02:16.765 ******** 2026-04-02 00:49:52.541344 | orchestrator | =============================================================================== 2026-04-02 00:49:52.541348 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.61s 2026-04-02 00:49:52.541352 | orchestrator | 
ovn-controller : Configure OVN in OVSDB -------------------------------- 20.63s 2026-04-02 00:49:52.541355 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.71s 2026-04-02 00:49:52.541359 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.96s 2026-04-02 00:49:52.541363 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.85s 2026-04-02 00:49:52.541367 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s 2026-04-02 00:49:52.541370 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.68s 2026-04-02 00:49:52.541380 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.64s 2026-04-02 00:49:52.541385 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.40s 2026-04-02 00:49:52.541388 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.20s 2026-04-02 00:49:52.541392 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.01s 2026-04-02 00:49:52.541396 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.94s 2026-04-02 00:49:52.541400 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.61s 2026-04-02 00:49:52.541403 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s 2026-04-02 00:49:52.541407 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.58s 2026-04-02 00:49:52.541411 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.44s 2026-04-02 00:49:52.541415 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.41s 2026-04-02 00:49:52.541419 | orchestrator | ovn-db : 
Ensuring config directories exist ------------------------------ 1.25s
2026-04-02 00:49:52.541422 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.14s
2026-04-02 00:49:52.541426 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.14s
2026-04-02 00:49:52.541430 | orchestrator | 2026-04-02 00:49:52 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:49:55.573465 | orchestrator | 2026-04-02 00:49:55 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED
2026-04-02 00:49:55.574905 | orchestrator | 2026-04-02 00:49:55 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:49:55.574996 | orchestrator | 2026-04-02 00:49:55 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:52:18.550993 | orchestrator | 2026-04-02 00:52:18 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state STARTED
2026-04-02 00:52:18.551112 | orchestrator | 2026-04-02 00:52:18 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED
2026-04-02 00:52:18.551134 | orchestrator | 2026-04-02 00:52:18 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:52:21.592394 | orchestrator | 2026-04-02 00:52:21 | INFO  | Task fe1ec3f3-de0d-4a28-86ca-35bf7b77cbbd is in state SUCCESS
2026-04-02 00:52:21.594410 | orchestrator |
2026-04-02 00:52:21.594462 | orchestrator |
2026-04-02 00:52:21.594468 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 00:52:21.594474 | orchestrator |
2026-04-02 00:52:21.594478 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 00:52:21.594483 | orchestrator | Thursday 02 April 2026 00:46:24 +0000 (0:00:00.333) 0:00:00.333 ********
2026-04-02 00:52:21.594487 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:52:21.594493 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:52:21.594497 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:52:21.594501 | orchestrator |
2026-04-02 00:52:21.594505 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 00:52:21.594510 | orchestrator | Thursday 02 April 2026 00:46:24 +0000 (0:00:00.421) 0:00:00.754 ********
2026-04-02 00:52:21.594514 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-02 00:52:21.594518 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-02 00:52:21.594522 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-02 00:52:21.594526 | orchestrator |
2026-04-02 00:52:21.594530 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-02 00:52:21.594533 | orchestrator |
2026-04-02 00:52:21.594537 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-02 00:52:21.594541 | orchestrator | Thursday 02 April 2026 00:46:25 +0000 (0:00:00.419) 0:00:01.173 ********
2026-04-02 00:52:21.594545 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:52:21.594549 | orchestrator |
2026-04-02 00:52:21.594553 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-02 00:52:21.594557 | orchestrator | Thursday 02 April 2026 00:46:26 +0000 (0:00:00.794) 0:00:01.968 ********
2026-04-02 00:52:21.594561 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:52:21.594565 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:52:21.594568 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:52:21.594572 | orchestrator |
2026-04-02 00:52:21.594576 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-02 00:52:21.594580 | orchestrator | Thursday 02 April 2026 00:46:27 +0000 (0:00:01.100) 0:00:03.069 ********
2026-04-02 00:52:21.594609 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:52:21.594612 | orchestrator |
2026-04-02 00:52:21.594616 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-02 00:52:21.594641 | orchestrator | Thursday 02 April 2026 00:46:27 +0000 (0:00:00.515) 0:00:03.585 ********
2026-04-02 00:52:21.594698 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:52:21.594703 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:52:21.594707 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:52:21.594711 | orchestrator |
2026-04-02 00:52:21.594715 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-02 00:52:21.594778 | orchestrator | Thursday 02 April 2026 00:46:28 +0000 (0:00:00.954) 0:00:04.539 ********
2026-04-02 00:52:21.594782 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-02 00:52:21.594796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-02 00:52:21.594801 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-02 00:52:21.594814 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-02 00:52:21.594818 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-02 00:52:21.594829 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-02 00:52:21.594833 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-02 00:52:21.594837 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-02 00:52:21.594841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-02 00:52:21.594845 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-02 00:52:21.594848 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-02 00:52:21.594852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-02 00:52:21.594856 | orchestrator |
2026-04-02 00:52:21.594860 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-02 00:52:21.594864 | orchestrator | Thursday 02 April 2026 00:46:33 +0000 (0:00:04.553) 0:00:09.093 ********
2026-04-02 00:52:21.594868 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-02 00:52:21.594872 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-02 00:52:21.594876 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-02 00:52:21.594880 | orchestrator |
2026-04-02 00:52:21.594884 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-02 00:52:21.594887 | orchestrator | Thursday 02 April 2026 00:46:34 +0000 (0:00:01.030) 0:00:10.123 ********
2026-04-02 00:52:21.594891 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-02 00:52:21.594895 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-02 00:52:21.594899 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-02 00:52:21.594903 | orchestrator |
2026-04-02 00:52:21.594906 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-02 00:52:21.594910 | orchestrator | Thursday 02 April 2026 00:46:35 +0000 (0:00:01.438) 0:00:11.562 ********
2026-04-02 00:52:21.594914 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-02 00:52:21.594918 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.594933 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-02 00:52:21.594937 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.594941 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-02 00:52:21.594945 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.594949 | orchestrator |
2026-04-02 00:52:21.594953 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-02 00:52:21.594957 | orchestrator | Thursday 02 April 2026 00:46:36 +0000 (0:00:00.639) 0:00:12.201 ********
2026-04-02 00:52:21.594962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.594977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.594984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.594989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.594994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.595001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.595006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.595015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.595019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.595023 | orchestrator |
2026-04-02 00:52:21.595027 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-02 00:52:21.595031 | orchestrator | Thursday 02 April 2026 00:46:38 +0000 (0:00:01.681) 0:00:13.883 ********
2026-04-02 00:52:21.595035 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.595039 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.595042 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.595046 | orchestrator |
2026-04-02 00:52:21.595050 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-02 00:52:21.595054 | orchestrator | Thursday 02 April 2026 00:46:39 +0000 (0:00:01.338) 0:00:15.221 ********
2026-04-02 00:52:21.595058 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-02 00:52:21.595061 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-02 00:52:21.595065 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-02 00:52:21.595072 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-02 00:52:21.595076 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-02 00:52:21.595079 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-02 00:52:21.595083 | orchestrator |
2026-04-02 00:52:21.595087 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-02 00:52:21.595091 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:02.289) 0:00:17.511 ********
2026-04-02 00:52:21.595095 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.595099 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.595105 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.595112 | orchestrator |
2026-04-02 00:52:21.595118 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-02 00:52:21.595124 | orchestrator | Thursday 02 April 2026 00:46:43 +0000 (0:00:01.487) 0:00:18.998 ********
2026-04-02 00:52:21.595134 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:52:21.595142 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:52:21.595148 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:52:21.595154 | orchestrator |
2026-04-02 00:52:21.595161 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-02 00:52:21.595167 | orchestrator | Thursday 02 April 2026 00:46:44 +0000 (0:00:01.745) 0:00:20.744 ********
2026-04-02 00:52:21.595173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.595210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.595218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.595227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-02 00:52:21.595233 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.595239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.595249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.595255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.595266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-02 00:52:21.595273 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.595285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.595292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.595304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.595314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value':
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-02 00:52:21.595320 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.595326 | orchestrator | 2026-04-02 00:52:21.595333 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-02 00:52:21.595339 | orchestrator | Thursday 02 April 2026 00:46:45 +0000 (0:00:00.723) 0:00:21.468 ******** 2026-04-02 00:52:21.595345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-02 00:52:21.595385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-02 00:52:21.595395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-02 00:52:21.595416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-02 00:52:21.595432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-02 00:52:21.595438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd', '__omit_place_holder__4c5f8ccb39f01dba3ea7e5533b5aae6dd88e29bd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-02 00:52:21.595445 | orchestrator | 2026-04-02 00:52:21.595452 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-02 00:52:21.595458 | orchestrator | Thursday 02 April 2026 00:46:48 +0000 (0:00:02.874) 0:00:24.342 ******** 2026-04-02 00:52:21.595467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.595730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.595741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.595754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.595771 | orchestrator | 2026-04-02 00:52:21.595785 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-02 00:52:21.595792 | orchestrator | Thursday 02 April 2026 00:46:51 +0000 (0:00:03.143) 0:00:27.485 ******** 2026-04-02 00:52:21.595799 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-02 00:52:21.595806 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-02 00:52:21.595811 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-02 00:52:21.595817 | orchestrator | 2026-04-02 00:52:21.595822 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-02 00:52:21.595829 | orchestrator | Thursday 02 April 2026 00:46:54 +0000 (0:00:02.795) 0:00:30.281 ******** 2026-04-02 00:52:21.595835 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-02 00:52:21.595841 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-02 00:52:21.595847 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-02 00:52:21.595853 | orchestrator | 2026-04-02 00:52:21.595866 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-02 00:52:21.595870 | orchestrator | Thursday 02 April 2026 00:46:58 +0000 (0:00:03.745) 0:00:34.027 ******** 2026-04-02 00:52:21.595876 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.595882 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.595890 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.595898 | orchestrator | 2026-04-02 00:52:21.595904 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-02 00:52:21.595910 | orchestrator | Thursday 02 April 2026 00:46:59 +0000 (0:00:01.274) 0:00:35.301 ******** 2026-04-02 00:52:21.595916 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-02 00:52:21.595922 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-02 00:52:21.595928 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-02 00:52:21.595933 | orchestrator | 2026-04-02 00:52:21.595939 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-02 00:52:21.595945 | orchestrator | Thursday 02 April 2026 00:47:02 +0000 (0:00:02.646) 0:00:37.947 ******** 2026-04-02 00:52:21.595950 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-02 00:52:21.595956 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-02 00:52:21.595962 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-02 00:52:21.595974 | orchestrator | 2026-04-02 00:52:21.595981 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-02 00:52:21.595987 | orchestrator | Thursday 02 April 2026 00:47:05 +0000 (0:00:03.434) 0:00:41.381 ******** 2026-04-02 00:52:21.595993 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-02 00:52:21.596000 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-02 00:52:21.596005 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-02 00:52:21.596010 | orchestrator | 2026-04-02 00:52:21.596016 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-02 00:52:21.596021 | orchestrator | Thursday 02 April 2026 00:47:08 +0000 (0:00:03.091) 0:00:44.473 ******** 2026-04-02 00:52:21.596026 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-02 00:52:21.596032 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-02 00:52:21.596037 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-02 00:52:21.596043 | orchestrator | 2026-04-02 00:52:21.596049 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-02 00:52:21.596060 | orchestrator | Thursday 02 April 2026 00:47:11 +0000 (0:00:02.913) 0:00:47.386 ******** 2026-04-02 00:52:21.596067 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.596072 | orchestrator | 2026-04-02 00:52:21.596078 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over 
extra CA certificates] *** 2026-04-02 00:52:21.596085 | orchestrator | Thursday 02 April 2026 00:47:12 +0000 (0:00:01.164) 0:00:48.551 ******** 2026-04-02 00:52:21.596091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.596097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.596110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.596117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.596129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.596138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.596145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.596151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.596156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.596162 | orchestrator | 2026-04-02 00:52:21.596167 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-02 00:52:21.596198 | orchestrator | 
Thursday 02 April 2026 00:47:16 +0000 (0:00:03.681) 0:00:52.232 ********
2026-04-02 00:52:21.596214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596604 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.596614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596628 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.596633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596656 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.596660 | orchestrator |
2026-04-02 00:52:21.596665 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-04-02 00:52:21.596670 | orchestrator | Thursday 02 April 2026 00:47:17 +0000 (0:00:01.113) 0:00:53.345 ********
2026-04-02 00:52:21.596675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596692 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.596696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596727 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.596735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596739 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.596742 | orchestrator |
2026-04-02 00:52:21.596747 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-02 00:52:21.596750 | orchestrator | Thursday 02 April 2026 00:47:19 +0000 (0:00:02.108) 0:00:55.454 ********
2026-04-02 00:52:21.596754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596772 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.596776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596791 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.596794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596815 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.596819 | orchestrator |
2026-04-02 00:52:21.596822 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-02 00:52:21.596826 | orchestrator | Thursday 02 April 2026 00:47:21 +0000 (0:00:01.615) 0:00:57.070 ********
2026-04-02 00:52:21.596830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596842 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.596848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596863 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.596870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596882 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.596885 | orchestrator |
2026-04-02 00:52:21.596889 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-02 00:52:21.596893 | orchestrator | Thursday 02 April 2026 00:47:22 +0000 (0:00:01.470) 0:00:58.540 ********
2026-04-02 00:52:21.596899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596914 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.596920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596932 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.596936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.596948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.596952 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.596956 | orchestrator |
2026-04-02 00:52:21.596959 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-04-02 00:52:21.596963 | orchestrator | Thursday 02 April 2026 00:47:23 +0000 (0:00:01.299) 0:00:59.840 ********
2026-04-02 00:52:21.596989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.596996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.597001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.597005 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.597008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.597015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.597023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.597027 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.597030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.597036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.597040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.597044 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.597048 | orchestrator |
2026-04-02 00:52:21.597052 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-04-02 00:52:21.597055 | orchestrator | Thursday 02 April 2026 00:47:24 +0000 (0:00:00.655) 0:01:00.496 ********
2026-04-02 00:52:21.597059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.597063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.597145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.597151 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.597155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.597159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-02 00:52:21.597168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-02 00:52:21.597172 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.597227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-02 00:52:21.597234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2',
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-02 00:52:21.597246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-02 00:52:21.597252 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.597259 | orchestrator | 2026-04-02 00:52:21.597269 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-02 00:52:21.597273 | orchestrator | Thursday 02 April 2026 00:47:25 +0000 (0:00:00.709) 0:01:01.205 ******** 2026-04-02 00:52:21.597277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-04-02 00:52:21.597281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-02 00:52:21.597288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-02 00:52:21.597294 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.597305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-02 00:52:21.597311 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-02 00:52:21.597322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-02 00:52:21.597329 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.597339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-02 00:52:21.597346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-02 00:52:21.597353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-02 00:52:21.597357 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.597361 | orchestrator | 2026-04-02 00:52:21.597365 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-02 00:52:21.597369 | orchestrator | Thursday 02 April 2026 00:47:26 +0000 (0:00:01.026) 0:01:02.232 ******** 2026-04-02 00:52:21.597372 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-02 00:52:21.597377 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-02 00:52:21.597383 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-02 00:52:21.597387 | orchestrator | 2026-04-02 00:52:21.597391 | orchestrator 
| TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-02 00:52:21.597395 | orchestrator | Thursday 02 April 2026 00:47:27 +0000 (0:00:01.453) 0:01:03.686 ******** 2026-04-02 00:52:21.597399 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-02 00:52:21.597403 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-02 00:52:21.597407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-02 00:52:21.597410 | orchestrator | 2026-04-02 00:52:21.597414 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-02 00:52:21.597418 | orchestrator | Thursday 02 April 2026 00:47:29 +0000 (0:00:01.760) 0:01:05.446 ******** 2026-04-02 00:52:21.597427 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-02 00:52:21.597431 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-02 00:52:21.597435 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-02 00:52:21.597439 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.597442 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-02 00:52:21.597446 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-02 00:52:21.597450 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.597454 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-02 00:52:21.597458 | orchestrator | skipping: [testbed-node-2] 2026-04-02 
00:52:21.597462 | orchestrator | 2026-04-02 00:52:21.597465 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-02 00:52:21.597469 | orchestrator | Thursday 02 April 2026 00:47:31 +0000 (0:00:01.709) 0:01:07.155 ******** 2026-04-02 00:52:21.597476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.597480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.597484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-02 00:52:21.597492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.597496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.597503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-02 00:52:21.597507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.597514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.597518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-02 00:52:21.597522 | orchestrator | 2026-04-02 00:52:21.597526 
| orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-02 00:52:21.597530 | orchestrator | Thursday 02 April 2026 00:47:34 +0000 (0:00:02.706) 0:01:09.862 ******** 2026-04-02 00:52:21.597534 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.597538 | orchestrator | 2026-04-02 00:52:21.597542 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-02 00:52:21.597545 | orchestrator | Thursday 02 April 2026 00:47:34 +0000 (0:00:00.581) 0:01:10.444 ******** 2026-04-02 00:52:21.597550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-02 00:52:21.597560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.597565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-02 00:52:21.597580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.597584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-02 00:52:21.597857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.597884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597892 | orchestrator | 2026-04-02 00:52:21.597896 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-02 00:52:21.597900 | orchestrator | Thursday 02 April 2026 00:47:38 +0000 (0:00:04.301) 0:01:14.745 ******** 2026-04-02 00:52:21.597904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-02 00:52:21.597918 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.597922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597930 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.597936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-02 00:52:21.597940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.597944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-02 
00:52:21.597953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597956 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.597963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-02 00:52:21.597967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.597971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.597982 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.597985 | orchestrator | 2026-04-02 00:52:21.597989 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-02 00:52:21.597993 | orchestrator | Thursday 02 April 2026 00:47:39 +0000 (0:00:00.704) 0:01:15.449 ******** 2026-04-02 00:52:21.597997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-02 00:52:21.598006 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-02 00:52:21.598053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-02 00:52:21.598060 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-02 00:52:21.598068 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-02 00:52:21.598076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-02 00:52:21.598080 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598084 | orchestrator | 2026-04-02 00:52:21.598091 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-02 00:52:21.598095 | orchestrator | Thursday 02 April 2026 00:47:40 +0000 (0:00:00.897) 0:01:16.347 ******** 2026-04-02 00:52:21.598098 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.598102 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.598106 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.598110 | orchestrator | 2026-04-02 00:52:21.598113 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] 
*************** 2026-04-02 00:52:21.598117 | orchestrator | Thursday 02 April 2026 00:47:42 +0000 (0:00:01.622) 0:01:17.969 ******** 2026-04-02 00:52:21.598121 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.598125 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.598128 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.598132 | orchestrator | 2026-04-02 00:52:21.598136 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-02 00:52:21.598140 | orchestrator | Thursday 02 April 2026 00:47:44 +0000 (0:00:02.040) 0:01:20.009 ******** 2026-04-02 00:52:21.598144 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.598147 | orchestrator | 2026-04-02 00:52:21.598151 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-02 00:52:21.598155 | orchestrator | Thursday 02 April 2026 00:47:44 +0000 (0:00:00.616) 0:01:20.626 ******** 2026-04-02 00:52:21.598159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2026-04-02 00:52:21.598167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.598302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.598333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598346 | orchestrator | 2026-04-02 00:52:21.598467 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using 
single external frontend] *** 2026-04-02 00:52:21.598474 | orchestrator | Thursday 02 April 2026 00:47:49 +0000 (0:00:05.158) 0:01:25.785 ******** 2026-04-02 00:52:21.598482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.598487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598496 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.598511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598519 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.598530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598542 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598546 | orchestrator | 2026-04-02 00:52:21.598550 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-02 00:52:21.598554 | orchestrator | Thursday 02 April 2026 00:47:50 +0000 (0:00:00.775) 0:01:26.560 ******** 2026-04-02 00:52:21.598561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-02 00:52:21.598566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-02 
00:52:21.598570 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-02 00:52:21.598578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-02 00:52:21.598582 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-02 00:52:21.598590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-02 00:52:21.598593 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598597 | orchestrator | 2026-04-02 00:52:21.598601 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-02 00:52:21.598605 | orchestrator | Thursday 02 April 2026 00:47:51 +0000 (0:00:00.621) 0:01:27.182 ******** 2026-04-02 00:52:21.598609 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.598612 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.598616 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.598620 | orchestrator | 2026-04-02 00:52:21.598624 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-02 00:52:21.598628 | orchestrator | Thursday 02 April 2026 00:47:52 +0000 (0:00:01.125) 0:01:28.307 
******** 2026-04-02 00:52:21.598632 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.598635 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.598639 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.598643 | orchestrator | 2026-04-02 00:52:21.598649 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-02 00:52:21.598653 | orchestrator | Thursday 02 April 2026 00:47:54 +0000 (0:00:01.913) 0:01:30.221 ******** 2026-04-02 00:52:21.598657 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598661 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598664 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598668 | orchestrator | 2026-04-02 00:52:21.598672 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-02 00:52:21.598676 | orchestrator | Thursday 02 April 2026 00:47:54 +0000 (0:00:00.258) 0:01:30.479 ******** 2026-04-02 00:52:21.598680 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.598687 | orchestrator | 2026-04-02 00:52:21.598691 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-02 00:52:21.598695 | orchestrator | Thursday 02 April 2026 00:47:55 +0000 (0:00:00.748) 0:01:31.228 ******** 2026-04-02 00:52:21.598699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-02 00:52:21.598706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-02 00:52:21.598710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check 
inter 2000 rise 2 fall 5']}}}}) 2026-04-02 00:52:21.598714 | orchestrator | 2026-04-02 00:52:21.598718 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-02 00:52:21.598722 | orchestrator | Thursday 02 April 2026 00:47:57 +0000 (0:00:02.492) 0:01:33.720 ******** 2026-04-02 00:52:21.598729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-02 00:52:21.598733 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-02 00:52:21.598744 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-02 00:52:21.598752 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598755 | orchestrator | 2026-04-02 00:52:21.598759 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-02 00:52:21.598763 | orchestrator | Thursday 02 April 2026 00:47:59 +0000 (0:00:01.300) 0:01:35.021 ******** 2026-04-02 00:52:21.598770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-02 00:52:21.598777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-02 00:52:21.598781 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-02 00:52:21.598789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-02 00:52:21.598793 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-02 00:52:21.598807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-02 00:52:21.598811 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598814 | orchestrator | 2026-04-02 00:52:21.598818 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-02 00:52:21.598822 | orchestrator | Thursday 02 April 2026 00:48:00 +0000 (0:00:01.561) 0:01:36.583 ******** 2026-04-02 00:52:21.598826 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598830 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598833 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598837 | orchestrator | 2026-04-02 00:52:21.598841 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-02 00:52:21.598871 | orchestrator | Thursday 02 April 2026 00:48:01 +0000 (0:00:00.360) 0:01:36.944 ******** 2026-04-02 00:52:21.598876 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.598879 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.598883 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.598887 | orchestrator | 2026-04-02 00:52:21.598891 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-02 00:52:21.598895 | orchestrator | Thursday 02 April 2026 00:48:02 +0000 (0:00:01.042) 0:01:37.986 ******** 2026-04-02 00:52:21.598898 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.598902 | orchestrator | 2026-04-02 00:52:21.598925 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-02 00:52:21.598952 | 
orchestrator | Thursday 02 April 2026 00:48:02 +0000 (0:00:00.759) 0:01:38.745 ******** 2026-04-02 00:52:21.598959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.598964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.598997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.599018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.599029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599054 | orchestrator | 2026-04-02 00:52:21.599058 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-02 00:52:21.599062 | orchestrator | Thursday 02 April 2026 00:48:05 +0000 (0:00:02.955) 0:01:41.701 ******** 2026-04-02 00:52:21.599067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.599076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599090 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.599094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.599125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599154 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.599166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.599190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599219 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.599224 | orchestrator | 2026-04-02 00:52:21.599229 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-02 00:52:21.599233 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.726) 0:01:42.427 ******** 2026-04-02 00:52:21.599238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-02 00:52:21.599243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-02 00:52:21.599248 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.599252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  
2026-04-02 00:52:21.599257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-02 00:52:21.599262 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.599266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-02 00:52:21.599273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-02 00:52:21.599278 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.599283 | orchestrator | 2026-04-02 00:52:21.599290 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-02 00:52:21.599298 | orchestrator | Thursday 02 April 2026 00:48:07 +0000 (0:00:01.343) 0:01:43.771 ******** 2026-04-02 00:52:21.599306 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.599312 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.599318 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.599323 | orchestrator | 2026-04-02 00:52:21.599329 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-02 00:52:21.599336 | orchestrator | Thursday 02 April 2026 00:48:09 +0000 (0:00:01.161) 0:01:44.932 ******** 2026-04-02 00:52:21.599342 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.599348 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.599356 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.599362 | orchestrator | 2026-04-02 00:52:21.599368 | 
orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-02 00:52:21.599374 | orchestrator | Thursday 02 April 2026 00:48:10 +0000 (0:00:01.797) 0:01:46.730 ******** 2026-04-02 00:52:21.599381 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.599387 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.599394 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.599400 | orchestrator | 2026-04-02 00:52:21.599406 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-02 00:52:21.599412 | orchestrator | Thursday 02 April 2026 00:48:11 +0000 (0:00:00.330) 0:01:47.061 ******** 2026-04-02 00:52:21.599419 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.599426 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.599439 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.599444 | orchestrator | 2026-04-02 00:52:21.599449 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-02 00:52:21.599453 | orchestrator | Thursday 02 April 2026 00:48:11 +0000 (0:00:00.283) 0:01:47.345 ******** 2026-04-02 00:52:21.599458 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.599462 | orchestrator | 2026-04-02 00:52:21.599467 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-02 00:52:21.599471 | orchestrator | Thursday 02 April 2026 00:48:12 +0000 (0:00:00.902) 0:01:48.248 ******** 2026-04-02 00:52:21.599480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 00:52:21.599485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 00:52:21.599489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 00:52:21.599827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 00:52:21.599833 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 00:52:21.599885 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 00:52:21.599892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599922 | orchestrator | 2026-04-02 00:52:21.599926 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-02 00:52:21.599930 | 
orchestrator | Thursday 02 April 2026 00:48:17 +0000 (0:00:05.048) 0:01:53.296 ******** 2026-04-02 00:52:21.599934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 00:52:21.599938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 00:52:21.599948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.599973 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.599978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 00:52:21.599987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 00:52:21.599991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 00:52:21.599998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 00:52:21.600006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 
00:52:21.600017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600042 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.600046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.600050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.600057 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.600061 | orchestrator |
2026-04-02 00:52:21.600065 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-02 00:52:21.600069 | orchestrator | Thursday 02 April 2026 00:48:18 +0000 (0:00:00.766) 0:01:54.062 ********
2026-04-02 00:52:21.600074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-02 00:52:21.600079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-02 00:52:21.600087 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.600095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-02 00:52:21.600103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-02 00:52:21.600109 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.600114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-02 00:52:21.600120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-02 00:52:21.600126 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.600132 | orchestrator |
2026-04-02 00:52:21.600137 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-02 00:52:21.600143 | orchestrator | Thursday 02 April 2026 00:48:19 +0000 (0:00:01.314) 0:01:55.377 ********
2026-04-02 00:52:21.600149 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.600156 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.600162 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.600168 | orchestrator |
2026-04-02 00:52:21.600195 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-02 00:52:21.600200 | orchestrator | Thursday 02 April 2026 00:48:20 +0000 (0:00:01.450) 0:01:56.828 ********
2026-04-02 00:52:21.600204 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.600212 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.600216 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.600219 | orchestrator |
2026-04-02 00:52:21.600223 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-02 00:52:21.600227 | orchestrator | Thursday 02 April 2026 00:48:23 +0000 (0:00:02.080) 0:01:58.908 ********
2026-04-02 00:52:21.600231 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.600235 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.600238 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.600242 | orchestrator |
2026-04-02 00:52:21.600246 | orchestrator | TASK
[include_role : glance] *************************************************** 2026-04-02 00:52:21.600249 | orchestrator | Thursday 02 April 2026 00:48:23 +0000 (0:00:00.250) 0:01:59.159 ******** 2026-04-02 00:52:21.600253 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.600257 | orchestrator | 2026-04-02 00:52:21.600261 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-02 00:52:21.600317 | orchestrator | Thursday 02 April 2026 00:48:24 +0000 (0:00:00.844) 0:02:00.003 ******** 2026-04-02 00:52:21.600326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 00:52:21.600384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.600619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 00:52:21.600652 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.600664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 00:52:21.600676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.600681 | orchestrator | 2026-04-02 00:52:21.600686 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-02 00:52:21.600690 | orchestrator | Thursday 02 April 2026 00:48:29 +0000 (0:00:05.062) 0:02:05.066 ******** 2026-04-02 00:52:21.600698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-02 00:52:21.600710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.600715 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.600724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-02 00:52:21.600735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.600740 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.600751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-02 00:52:21.600756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.600764 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.600769 | orchestrator | 2026-04-02 00:52:21.600773 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-02 00:52:21.600777 | orchestrator | Thursday 02 April 2026 00:48:32 +0000 (0:00:03.503) 0:02:08.569 ******** 2026-04-02 00:52:21.600782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-02 00:52:21.600790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-02 00:52:21.600795 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.600800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-02 00:52:21.600807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-02 00:52:21.600814 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.600818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-02 00:52:21.600822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-02 00:52:21.600826 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.600868 | orchestrator | 2026-04-02 00:52:21.600873 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-02 00:52:21.600877 | orchestrator | Thursday 02 April 2026 00:48:36 +0000 (0:00:03.839) 0:02:12.409 ******** 2026-04-02 00:52:21.600880 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.600884 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.600888 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.600892 | orchestrator | 2026-04-02 00:52:21.600896 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-02 00:52:21.600920 | orchestrator | Thursday 02 April 2026 00:48:37 +0000 (0:00:01.287) 0:02:13.696 ******** 2026-04-02 00:52:21.600926 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.600929 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.600933 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.600937 | orchestrator | 2026-04-02 00:52:21.600941 | orchestrator | TASK [include_role : gnocchi] 
************************************************** 2026-04-02 00:52:21.600980 | orchestrator | Thursday 02 April 2026 00:48:39 +0000 (0:00:02.089) 0:02:15.786 ******** 2026-04-02 00:52:21.600986 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.600990 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.600993 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.600997 | orchestrator | 2026-04-02 00:52:21.601001 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-02 00:52:21.601005 | orchestrator | Thursday 02 April 2026 00:48:40 +0000 (0:00:00.350) 0:02:16.137 ******** 2026-04-02 00:52:21.601010 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.601017 | orchestrator | 2026-04-02 00:52:21.601023 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-02 00:52:21.601032 | orchestrator | Thursday 02 April 2026 00:48:41 +0000 (0:00:01.083) 0:02:17.221 ******** 2026-04-02 00:52:21.601045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 00:52:21.601061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 00:52:21.601072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 00:52:21.601079 | orchestrator | 2026-04-02 00:52:21.601085 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-02 00:52:21.601091 | orchestrator | Thursday 02 April 2026 00:48:45 +0000 (0:00:03.727) 0:02:20.948 ******** 2026-04-02 00:52:21.601098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 00:52:21.601104 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.601111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 00:52:21.601117 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.601124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 00:52:21.601131 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.601138 | orchestrator | 2026-04-02 00:52:21.601153 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-02 
00:52:21.601160 | orchestrator | Thursday 02 April 2026 00:48:45 +0000 (0:00:00.293) 0:02:21.241 ******** 2026-04-02 00:52:21.601168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-02 00:52:21.601226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-02 00:52:21.601232 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.601236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-02 00:52:21.601240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-02 00:52:21.601244 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.601247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-02 00:52:21.601255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-02 00:52:21.601259 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.601263 | orchestrator | 2026-04-02 00:52:21.601267 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-02 00:52:21.601271 | orchestrator | Thursday 02 April 2026 00:48:45 +0000 (0:00:00.589) 
0:02:21.830 ******** 2026-04-02 00:52:21.601275 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.601279 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.601283 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.601286 | orchestrator | 2026-04-02 00:52:21.601290 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-02 00:52:21.601294 | orchestrator | Thursday 02 April 2026 00:48:47 +0000 (0:00:01.104) 0:02:22.935 ******** 2026-04-02 00:52:21.601298 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.601301 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.601305 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.601309 | orchestrator | 2026-04-02 00:52:21.601313 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-02 00:52:21.601317 | orchestrator | Thursday 02 April 2026 00:48:49 +0000 (0:00:02.111) 0:02:25.046 ******** 2026-04-02 00:52:21.601320 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.601324 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.601328 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.601332 | orchestrator | 2026-04-02 00:52:21.601335 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-02 00:52:21.601339 | orchestrator | Thursday 02 April 2026 00:48:49 +0000 (0:00:00.262) 0:02:25.309 ******** 2026-04-02 00:52:21.601343 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.601347 | orchestrator | 2026-04-02 00:52:21.601350 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-02 00:52:21.601354 | orchestrator | Thursday 02 April 2026 00:48:50 +0000 (0:00:00.923) 0:02:26.232 ******** 2026-04-02 00:52:21.601364 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-02 00:52:21.601377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-02 00:52:21.601943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-02 00:52:21.601970 | orchestrator | 2026-04-02 00:52:21.601975 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-02 00:52:21.601979 | orchestrator | Thursday 02 April 2026 00:48:53 +0000 (0:00:02.973) 0:02:29.205 ******** 2026-04-02 00:52:21.601986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-02 00:52:21.601993 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.602003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-02 00:52:21.602009 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.602042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-02 00:52:21.602050 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.602054 | orchestrator | 2026-04-02 00:52:21.602057 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-02 00:52:21.602111 | orchestrator | Thursday 02 April 2026 00:48:54 +0000 (0:00:00.706) 0:02:29.912 ******** 2026-04-02 00:52:21.602119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-02 00:52:21.602124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-02 00:52:21.602131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-02 00:52:21.602137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-02 00:52:21.602296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-02 00:52:21.602309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-02 00:52:21.602316 | orchestrator | skipping: 
[testbed-node-0] 2026-04-02 00:52:21.602323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-02 00:52:21.602330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-02 00:52:21.602336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-02 00:52:21.602348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-02 00:52:21.602354 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.602360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-02 00:52:21.602366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-02 00:52:21.602371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-02 00:52:21.602381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-02 00:52:21.602386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-02 00:52:21.602393 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.602398 | orchestrator |
2026-04-02 00:52:21.602404 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-02 00:52:21.602410 | orchestrator | Thursday 02 April 2026 00:48:55 +0000 (0:00:00.999) 0:02:30.912 ********
2026-04-02 00:52:21.602416 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.602422 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.602428 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.602433 | orchestrator |
2026-04-02 00:52:21.602439 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-02 00:52:21.602444 | orchestrator | Thursday 02 April 2026 00:48:56 +0000 (0:00:01.616) 0:02:32.528 ********
2026-04-02 00:52:21.602450 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.602456 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.602461 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.602467 | orchestrator |
2026-04-02 00:52:21.602474 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-02 00:52:21.602480 | orchestrator | Thursday 02 April 2026 00:48:58 +0000 (0:00:02.136) 0:02:34.665 ********
2026-04-02 00:52:21.602486 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.602493 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.602499 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.602505 | orchestrator |
2026-04-02 00:52:21.602510 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-02 00:52:21.602515 | orchestrator | Thursday 02 April 2026 00:48:59 +0000 (0:00:00.295) 0:02:34.961 ********
2026-04-02 00:52:21.602525 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.602532 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.602540 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.602545 | orchestrator |
2026-04-02 00:52:21.602556 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-02 00:52:21.602562 | orchestrator | Thursday 02 April 2026 00:48:59 +0000 (0:00:00.285) 0:02:35.247 ********
2026-04-02 00:52:21.602568 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:52:21.602574 | orchestrator |
2026-04-02 00:52:21.602579 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-02 00:52:21.602604 | orchestrator | Thursday 02 April 2026 00:49:00 +0000 (0:00:01.098) 0:02:36.345 ********
2026-04-02 00:52:21.602611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image':
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:52:21.602620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:52:21.602680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:52:21.602691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:52:21.602699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:52:21.602708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:52:21.602773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:52:21.602779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:52:21.602787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:52:21.602791 | orchestrator | 2026-04-02 00:52:21.602795 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-02 00:52:21.602799 | orchestrator | Thursday 02 April 2026 00:49:04 +0000 (0:00:03.597) 0:02:39.942 ******** 2026-04-02 00:52:21.602806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:52:21.602841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:52:21.602846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:52:21.602850 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.602854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:52:21.602861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:52:21.602865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:52:21.602872 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 00:52:21.602882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:52:21.602887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:52:21.602891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-02 00:52:21.602895 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.602899 | orchestrator |
2026-04-02 00:52:21.602903 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-02 00:52:21.602906 | orchestrator | Thursday 02 April 2026 00:49:04 +0000 (0:00:00.541) 0:02:40.484 ********
2026-04-02 00:52:21.602911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-02 00:52:21.602915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-02 00:52:21.602920 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.602926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-02 00:52:21.602930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-02 00:52:21.602938 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.602943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-02 00:52:21.602947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-02 00:52:21.602952 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.602956 | orchestrator |
2026-04-02 00:52:21.602961 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-02 00:52:21.602965 | orchestrator | Thursday 02 April 2026 00:49:05 +0000 (0:00:00.860) 0:02:41.344 ********
2026-04-02 00:52:21.602969 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.602974 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.602978 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.602982 | orchestrator |
2026-04-02 00:52:21.602987 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-02 00:52:21.602994 | orchestrator | Thursday 02 April 2026 00:49:06 +0000 (0:00:01.238) 0:02:42.582 ********
2026-04-02 00:52:21.602999 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:52:21.603003 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:52:21.603008 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:52:21.603012 | orchestrator |
2026-04-02 00:52:21.603016 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-02 00:52:21.603021 | orchestrator | Thursday 02 April 2026 00:49:08 +0000 (0:00:01.882) 0:02:44.464 ********
2026-04-02 00:52:21.603025 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.603029 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.603033 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.603038 | orchestrator |
2026-04-02 00:52:21.603042 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-02 00:52:21.603047 | orchestrator | Thursday 02 April 2026 00:49:08 +0000 (0:00:00.272) 0:02:44.737 ********
2026-04-02 00:52:21.603051 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:52:21.603055 | orchestrator |
2026-04-02 00:52:21.603059 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-04-02 00:52:21.603063 | orchestrator | Thursday 02 April 2026 00:49:10 +0000 (0:00:01.187) 0:02:45.925 ********
2026-04-02 00:52:21.603068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-02 00:52:21.603074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value':
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 00:52:21.603090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 00:52:21.603101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603106 | orchestrator | 2026-04-02 00:52:21.603110 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-02 00:52:21.603115 | orchestrator | Thursday 02 April 2026 00:49:13 +0000 (0:00:03.251) 0:02:49.176 ******** 2026-04-02 00:52:21.603119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 00:52:21.603131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603136 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.603143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 00:52:21.603148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603152 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.603157 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 00:52:21.603164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603169 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.603173 | orchestrator | 2026-04-02 00:52:21.603199 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-02 00:52:21.603204 | orchestrator | Thursday 02 April 2026 00:49:13 +0000 (0:00:00.568) 0:02:49.745 
******** 2026-04-02 00:52:21.603209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-02 00:52:21.603214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-02 00:52:21.603219 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.603223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-02 00:52:21.603227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-02 00:52:21.603232 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.603236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-02 00:52:21.603269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-02 00:52:21.603276 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.603284 | orchestrator | 2026-04-02 00:52:21.603293 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-02 00:52:21.603301 | orchestrator | Thursday 02 April 2026 00:49:14 +0000 (0:00:00.879) 0:02:50.625 ******** 2026-04-02 00:52:21.603306 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.603313 | 
orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.603319 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.603364 | orchestrator | 2026-04-02 00:52:21.603370 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-02 00:52:21.603376 | orchestrator | Thursday 02 April 2026 00:49:16 +0000 (0:00:01.271) 0:02:51.896 ******** 2026-04-02 00:52:21.603382 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.603388 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.603394 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.603399 | orchestrator | 2026-04-02 00:52:21.603405 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-02 00:52:21.603411 | orchestrator | Thursday 02 April 2026 00:49:18 +0000 (0:00:02.098) 0:02:53.995 ******** 2026-04-02 00:52:21.603423 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.603429 | orchestrator | 2026-04-02 00:52:21.603435 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-02 00:52:21.603440 | orchestrator | Thursday 02 April 2026 00:49:19 +0000 (0:00:01.007) 0:02:55.002 ******** 2026-04-02 00:52:21.603447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-02 00:52:21.603454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-02 00:52:21.603484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-02 00:52:21.603507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603945 | orchestrator | 2026-04-02 00:52:21.603962 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-02 00:52:21.603966 | orchestrator | Thursday 02 April 
2026 00:49:22 +0000 (0:00:03.402) 0:02:58.405 ******** 2026-04-02 00:52:21.603971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-02 00:52:21.603975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.603994 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-02 00:52:21.604013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.604017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.604021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.604025 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-02 00:52:21.604037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.604044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 
'timeout': '30'}}})  2026-04-02 00:52:21.604094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.604099 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604103 | orchestrator | 2026-04-02 00:52:21.604107 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-02 00:52:21.604111 | orchestrator | Thursday 02 April 2026 00:49:23 +0000 (0:00:00.752) 0:02:59.157 ******** 2026-04-02 00:52:21.604115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-02 00:52:21.604119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-02 00:52:21.604123 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-02 00:52:21.604131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}})  2026-04-02 00:52:21.604156 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-02 00:52:21.604164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-02 00:52:21.604168 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604172 | orchestrator | 2026-04-02 00:52:21.604403 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-02 00:52:21.604409 | orchestrator | Thursday 02 April 2026 00:49:24 +0000 (0:00:00.817) 0:02:59.974 ******** 2026-04-02 00:52:21.604425 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.604430 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.604434 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.604437 | orchestrator | 2026-04-02 00:52:21.604441 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-02 00:52:21.604445 | orchestrator | Thursday 02 April 2026 00:49:25 +0000 (0:00:01.226) 0:03:01.201 ******** 2026-04-02 00:52:21.604449 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.604453 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.604457 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.604460 | orchestrator | 2026-04-02 00:52:21.604464 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-02 00:52:21.604476 | orchestrator | Thursday 02 April 2026 00:49:27 +0000 (0:00:01.838) 0:03:03.039 ******** 2026-04-02 00:52:21.604480 | orchestrator | included: mariadb for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-02 00:52:21.604484 | orchestrator | 2026-04-02 00:52:21.604488 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-02 00:52:21.604492 | orchestrator | Thursday 02 April 2026 00:49:28 +0000 (0:00:01.131) 0:03:04.171 ******** 2026-04-02 00:52:21.604496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-02 00:52:21.604500 | orchestrator | 2026-04-02 00:52:21.604504 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-02 00:52:21.604508 | orchestrator | Thursday 02 April 2026 00:49:31 +0000 (0:00:02.816) 0:03:06.987 ******** 2026-04-02 00:52:21.604517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:52:21.604522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-02 00:52:21.604526 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:52:21.604555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:52:21.604559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-02 00:52:21.604564 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-02 00:52:21.604580 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604584 | orchestrator | 2026-04-02 00:52:21.604588 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-02 00:52:21.604592 | orchestrator | Thursday 02 April 2026 00:49:33 +0000 (0:00:02.019) 0:03:09.007 ******** 2026-04-02 00:52:21.604599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:52:21.604603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-02 00:52:21.604607 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:52:21.604622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-02 00:52:21.604626 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:52:21.604637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-02 00:52:21.604641 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604645 | orchestrator | 2026-04-02 00:52:21.604652 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-02 00:52:21.604656 | orchestrator | Thursday 02 April 2026 00:49:35 +0000 (0:00:02.528) 0:03:11.535 ******** 2026-04-02 00:52:21.604662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-02 00:52:21.604667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-02 00:52:21.604671 | 
orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-02 00:52:21.604682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-02 00:52:21.604686 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-02 
00:52:21.604694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-02 00:52:21.604754 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604759 | orchestrator | 2026-04-02 00:52:21.604763 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-02 00:52:21.604767 | orchestrator | Thursday 02 April 2026 00:49:37 +0000 (0:00:02.063) 0:03:13.599 ******** 2026-04-02 00:52:21.604791 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.604795 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.604823 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.604828 | orchestrator | 2026-04-02 00:52:21.604832 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-02 00:52:21.604835 | orchestrator | Thursday 02 April 2026 00:49:39 +0000 (0:00:02.022) 0:03:15.622 ******** 2026-04-02 00:52:21.604839 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604843 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604847 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604851 | orchestrator | 2026-04-02 00:52:21.604855 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-02 00:52:21.604858 | orchestrator | Thursday 02 April 2026 00:49:41 +0000 (0:00:01.521) 0:03:17.143 ******** 2026-04-02 00:52:21.604865 
| orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604869 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604873 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604899 | orchestrator | 2026-04-02 00:52:21.604905 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-02 00:52:21.604908 | orchestrator | Thursday 02 April 2026 00:49:41 +0000 (0:00:00.286) 0:03:17.430 ******** 2026-04-02 00:52:21.604912 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.604916 | orchestrator | 2026-04-02 00:52:21.604920 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-02 00:52:21.604923 | orchestrator | Thursday 02 April 2026 00:49:42 +0000 (0:00:01.307) 0:03:18.737 ******** 2026-04-02 00:52:21.604928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-02 00:52:21.604935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-02 00:52:21.604939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-02 00:52:21.604947 | orchestrator | 2026-04-02 00:52:21.604951 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-02 00:52:21.604955 | orchestrator | Thursday 02 April 2026 00:49:44 +0000 (0:00:01.312) 0:03:20.050 ******** 2026-04-02 00:52:21.604959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-02 00:52:21.604966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-02 00:52:21.604971 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.604974 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.604978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2026-04-02 00:52:21.604982 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.604986 | orchestrator | 2026-04-02 00:52:21.604990 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-02 00:52:21.604994 | orchestrator | Thursday 02 April 2026 00:49:44 +0000 (0:00:00.345) 0:03:20.395 ******** 2026-04-02 00:52:21.605000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-02 00:52:21.605004 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.605008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-02 00:52:21.605012 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.605016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-02 00:52:21.605023 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.605027 | orchestrator | 2026-04-02 00:52:21.605032 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-02 00:52:21.605037 | orchestrator | Thursday 02 April 2026 00:49:45 +0000 (0:00:00.761) 0:03:21.156 ******** 2026-04-02 00:52:21.605041 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.605046 | orchestrator | skipping: [testbed-node-1] 
2026-04-02 00:52:21.605050 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.605055 | orchestrator |
2026-04-02 00:52:21.605059 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-02 00:52:21.605064 | orchestrator | Thursday 02 April 2026 00:49:45 +0000 (0:00:00.352) 0:03:21.509 ********
2026-04-02 00:52:21.605068 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.605073 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.605077 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.605081 | orchestrator |
2026-04-02 00:52:21.605086 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-02 00:52:21.605090 | orchestrator | Thursday 02 April 2026 00:49:46 +0000 (0:00:01.106) 0:03:22.615 ********
2026-04-02 00:52:21.605094 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.605099 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.605103 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.605107 | orchestrator |
2026-04-02 00:52:21.605112 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-02 00:52:21.605116 | orchestrator | Thursday 02 April 2026 00:49:47 +0000 (0:00:00.267) 0:03:22.883 ********
2026-04-02 00:52:21.605120 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:52:21.605125 | orchestrator |
2026-04-02 00:52:21.605129 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-04-02 00:52:21.605134 | orchestrator | Thursday 02 April 2026 00:49:48 +0000 (0:00:01.295) 0:03:24.179 ********
2026-04-02 00:52:21.605142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-02 00:52:21.605147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-02 00:52:21.605173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-02 00:52:21.605232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-02 00:52:21.605276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-02 00:52:21.605280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-02 00:52:21.605284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-02 00:52:21.605344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-02 00:52:21.605359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-02 00:52:21.605507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-04-02 00:52:21.605535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-02 00:52:21.605579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-02 00:52:21.605587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-02 00:52:21.605598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-04-02 00:52:21.605618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-02 00:52:21.605627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-02 00:52:21.605631 | orchestrator |
2026-04-02 00:52:21.605634 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-02 00:52:21.605638 | orchestrator | Thursday 02 April 2026 00:49:52 +0000 (0:00:04.068) 0:03:28.248 ********
2026-04-02 00:52:21.605646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-02 00:52:21.605653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-04-02 00:52:21.605668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-02 00:52:21.605672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 00:52:21.605700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 00:52:21.605713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 2026-04-02 00:52:21 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED 2026-04-02 00:52:21.605718 | orchestrator | 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-02 
00:52:21.605723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.605758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-02 00:52:21.605762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-02 00:52:21.605766 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.605770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 00:52:21.605797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2026-04-02 00:52:21.605802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 00:52:21.605849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.605924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-02 00:52:21.605930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-02 00:52:21.605934 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.605938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.605985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.605995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.606004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 00:52:21.606044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.606051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-02 00:52:21.606055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-02 00:52:21.606062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.606066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-02 00:52:21.606073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-02 00:52:21.606078 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.606081 | orchestrator | 2026-04-02 00:52:21.606085 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-02 00:52:21.606089 | orchestrator | Thursday 02 April 2026 00:49:54 +0000 (0:00:01.650) 0:03:29.898 ******** 2026-04-02 00:52:21.606093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-02 00:52:21.606100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-02 00:52:21.606105 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.606108 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-02 00:52:21.606112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-02 00:52:21.606116 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.606120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-02 00:52:21.606124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-02 00:52:21.606127 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.606131 | orchestrator | 2026-04-02 00:52:21.606135 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-02 00:52:21.606140 | orchestrator | Thursday 02 April 2026 00:49:55 +0000 (0:00:01.320) 0:03:31.219 ******** 2026-04-02 00:52:21.606144 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.606148 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.606153 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.606157 | orchestrator | 2026-04-02 00:52:21.606161 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-02 00:52:21.606169 | orchestrator | Thursday 02 April 2026 00:49:56 +0000 (0:00:01.271) 0:03:32.490 ******** 2026-04-02 00:52:21.606173 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.606194 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.606199 | orchestrator | 
changed: [testbed-node-2] 2026-04-02 00:52:21.606203 | orchestrator | 2026-04-02 00:52:21.606207 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-02 00:52:21.606212 | orchestrator | Thursday 02 April 2026 00:49:58 +0000 (0:00:01.846) 0:03:34.337 ******** 2026-04-02 00:52:21.606221 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.606226 | orchestrator | 2026-04-02 00:52:21.606230 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-02 00:52:21.606235 | orchestrator | Thursday 02 April 2026 00:49:59 +0000 (0:00:01.253) 0:03:35.590 ******** 2026-04-02 00:52:21.606240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.606245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.606253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.606258 | orchestrator | 2026-04-02 00:52:21.606262 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-02 00:52:21.606267 | orchestrator | Thursday 02 April 2026 00:50:02 +0000 (0:00:02.986) 0:03:38.577 ******** 2026-04-02 00:52:21.606275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.606283 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.606288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.606293 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.606298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.606302 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.606307 | orchestrator | 2026-04-02 00:52:21.606311 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-02 00:52:21.606316 | orchestrator | Thursday 02 April 2026 00:50:03 +0000 (0:00:00.437) 0:03:39.015 ******** 2026-04-02 00:52:21.606320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-02 00:52:21.606326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-02 00:52:21.606331 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:52:21.606335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-02 00:52:21.606828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-02 00:52:21.606833 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.606837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-02 00:52:21.606848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-02 00:52:21.606852 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.606856 | orchestrator | 2026-04-02 00:52:21.606860 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-02 00:52:21.606864 | orchestrator | Thursday 02 April 2026 00:50:04 +0000 (0:00:01.012) 0:03:40.027 ******** 2026-04-02 00:52:21.606868 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.606875 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.606879 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.606882 | orchestrator | 2026-04-02 00:52:21.606886 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-02 00:52:21.606890 | orchestrator | Thursday 02 April 2026 00:50:05 +0000 (0:00:01.351) 0:03:41.378 ******** 2026-04-02 00:52:21.606894 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.606897 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.606901 | 
orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.606905 | orchestrator | 2026-04-02 00:52:21.606909 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-02 00:52:21.606912 | orchestrator | Thursday 02 April 2026 00:50:07 +0000 (0:00:02.081) 0:03:43.460 ******** 2026-04-02 00:52:21.606916 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.606920 | orchestrator | 2026-04-02 00:52:21.606924 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-02 00:52:21.606927 | orchestrator | Thursday 02 April 2026 00:50:08 +0000 (0:00:01.255) 0:03:44.715 ******** 2026-04-02 00:52:21.606932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 
00:52:21.606942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.606951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.607009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607060 | orchestrator | 2026-04-02 00:52:21.607066 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-02 00:52:21.607072 | orchestrator | Thursday 02 April 2026 00:50:12 +0000 (0:00:03.728) 0:03:48.444 ******** 2026-04-02 00:52:21.607343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.607352 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607360 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.607369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.607382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607390 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.607394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.607399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.607414 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.607418 | orchestrator | 2026-04-02 00:52:21.607422 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-02 00:52:21.607426 | orchestrator | Thursday 02 April 2026 00:50:13 +0000 (0:00:00.637) 0:03:49.081 ******** 2026-04-02 00:52:21.607430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607451 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.607455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607471 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.607474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-02 00:52:21.607493 | orchestrator | 
skipping: [testbed-node-2] 2026-04-02 00:52:21.607497 | orchestrator | 2026-04-02 00:52:21.607501 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-02 00:52:21.607505 | orchestrator | Thursday 02 April 2026 00:50:14 +0000 (0:00:00.867) 0:03:49.948 ******** 2026-04-02 00:52:21.607509 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.607512 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.607516 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.607520 | orchestrator | 2026-04-02 00:52:21.607524 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-02 00:52:21.607527 | orchestrator | Thursday 02 April 2026 00:50:15 +0000 (0:00:01.838) 0:03:51.786 ******** 2026-04-02 00:52:21.607531 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.607585 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.607589 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.607593 | orchestrator | 2026-04-02 00:52:21.607597 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-02 00:52:21.607601 | orchestrator | Thursday 02 April 2026 00:50:18 +0000 (0:00:02.117) 0:03:53.904 ******** 2026-04-02 00:52:21.607605 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.607608 | orchestrator | 2026-04-02 00:52:21.607612 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-02 00:52:21.607619 | orchestrator | Thursday 02 April 2026 00:50:19 +0000 (0:00:01.311) 0:03:55.216 ******** 2026-04-02 00:52:21.607623 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-02 00:52:21.607628 | orchestrator | 2026-04-02 00:52:21.607632 | orchestrator | TASK 
[haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-02 00:52:21.607635 | orchestrator | Thursday 02 April 2026 00:50:20 +0000 (0:00:01.368) 0:03:56.584 ******** 2026-04-02 00:52:21.607639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-02 00:52:21.607647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-02 00:52:21.607651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-02 00:52:21.607655 | orchestrator | 2026-04-02 00:52:21.607659 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external 
frontend] *** 2026-04-02 00:52:21.607667 | orchestrator | Thursday 02 April 2026 00:50:24 +0000 (0:00:03.536) 0:04:00.121 ******** 2026-04-02 00:52:21.607671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.607675 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.607744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.607749 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.607753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.607757 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.607761 | 
orchestrator | 2026-04-02 00:52:21.607765 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-02 00:52:21.607769 | orchestrator | Thursday 02 April 2026 00:50:25 +0000 (0:00:01.173) 0:04:01.294 ******** 2026-04-02 00:52:21.607775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-02 00:52:21.607779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-02 00:52:21.607784 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.607788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-02 00:52:21.607792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-02 00:52:21.607796 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.607800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-02 00:52:21.607806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-02 00:52:21.607810 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.607814 | orchestrator | 2026-04-02 00:52:21.607818 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-02 00:52:21.607827 | orchestrator | Thursday 02 April 2026 00:50:26 +0000 (0:00:01.510) 0:04:02.804 ******** 2026-04-02 00:52:21.607831 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.607835 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.607839 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.607843 | orchestrator | 2026-04-02 00:52:21.607847 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-02 00:52:21.607852 | orchestrator | Thursday 02 April 2026 00:50:29 +0000 (0:00:02.408) 0:04:05.213 ******** 2026-04-02 00:52:21.607858 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.607864 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.607870 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.607875 | orchestrator | 2026-04-02 00:52:21.607881 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-02 00:52:21.607886 | orchestrator | Thursday 02 April 2026 00:50:32 +0000 (0:00:02.629) 0:04:07.842 ******** 2026-04-02 00:52:21.607893 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-02 00:52:21.607899 | orchestrator | 2026-04-02 00:52:21.607905 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-02 00:52:21.607910 | orchestrator | Thursday 02 April 2026 00:50:32 +0000 (0:00:00.723) 0:04:08.566 ******** 2026-04-02 00:52:21.607916 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.607923 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.607928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.607935 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.607944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.607950 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.607956 | orchestrator | 2026-04-02 00:52:21.607962 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy 
when using single external frontend] *** 2026-04-02 00:52:21.607968 | orchestrator | Thursday 02 April 2026 00:50:33 +0000 (0:00:01.103) 0:04:09.669 ******** 2026-04-02 00:52:21.607976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.607987 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.607996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-02 00:52:21.608002 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.608007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2026-04-02 00:52:21.608011 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.608016 | orchestrator | 2026-04-02 00:52:21.608020 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-02 00:52:21.608025 | orchestrator | Thursday 02 April 2026 00:50:35 +0000 (0:00:01.301) 0:04:10.970 ******** 2026-04-02 00:52:21.608029 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.608034 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.608038 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.608042 | orchestrator | 2026-04-02 00:52:21.608047 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-02 00:52:21.608051 | orchestrator | Thursday 02 April 2026 00:50:36 +0000 (0:00:01.078) 0:04:12.049 ******** 2026-04-02 00:52:21.608055 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.608060 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.608064 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.608069 | orchestrator | 2026-04-02 00:52:21.608073 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-02 00:52:21.608077 | orchestrator | Thursday 02 April 2026 00:50:38 +0000 (0:00:02.193) 0:04:14.243 ******** 2026-04-02 00:52:21.608082 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.608086 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.608091 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.608095 | orchestrator | 2026-04-02 00:52:21.608099 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-02 00:52:21.608104 | orchestrator | Thursday 02 April 2026 00:50:41 +0000 (0:00:02.611) 0:04:16.854 ******** 2026-04-02 00:52:21.608108 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item=nova-serialproxy) 2026-04-02 00:52:21.608113 | orchestrator | 2026-04-02 00:52:21.608119 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-02 00:52:21.608125 | orchestrator | Thursday 02 April 2026 00:50:41 +0000 (0:00:00.709) 0:04:17.563 ******** 2026-04-02 00:52:21.608131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-02 00:52:21.608139 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.608153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-02 00:52:21 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:52:21.608167 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.608189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-02 00:52:21.608196 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.608201 | orchestrator | 2026-04-02 00:52:21.608206 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-02 00:52:21.608216 | orchestrator | Thursday 02 April 2026 00:50:42 +0000 (0:00:01.144) 0:04:18.708 ******** 2026-04-02 00:52:21.608343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-02 00:52:21.608349 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.608353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-02 00:52:21.608357 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.608361 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-02 00:52:21.608365 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.608368 | orchestrator | 2026-04-02 00:52:21.608372 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-02 00:52:21.608376 | orchestrator | Thursday 02 April 2026 00:50:43 +0000 (0:00:01.061) 0:04:19.770 ******** 2026-04-02 00:52:21.608380 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.608384 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.608388 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.608391 | orchestrator | 2026-04-02 00:52:21.608400 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-02 00:52:21.608404 | orchestrator | Thursday 02 April 2026 00:50:45 +0000 (0:00:01.349) 0:04:21.119 ******** 2026-04-02 00:52:21.608407 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.608411 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.608415 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.608419 | orchestrator | 2026-04-02 00:52:21.608423 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-02 00:52:21.608427 | orchestrator | Thursday 02 April 2026 00:50:47 +0000 (0:00:02.261) 0:04:23.381 ******** 2026-04-02 00:52:21.608430 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.608434 | orchestrator | ok: [testbed-node-1] 
2026-04-02 00:52:21.608438 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.608442 | orchestrator | 2026-04-02 00:52:21.608542 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-02 00:52:21.608548 | orchestrator | Thursday 02 April 2026 00:50:50 +0000 (0:00:02.840) 0:04:26.222 ******** 2026-04-02 00:52:21.608554 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.608560 | orchestrator | 2026-04-02 00:52:21.608566 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-02 00:52:21.608590 | orchestrator | Thursday 02 April 2026 00:50:51 +0000 (0:00:01.191) 0:04:27.414 ******** 2026-04-02 00:52:21.608596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.608606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 00:52:21.608610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.608630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.608634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 00:52:21.608641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 
00:52:21.608656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.608660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 00:52:21.608668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.608682 | orchestrator | 2026-04-02 00:52:21.608686 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-02 00:52:21.608690 | orchestrator | Thursday 02 April 2026 00:50:54 +0000 (0:00:03.166) 0:04:30.580 ******** 2026-04-02 00:52:21.608695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': 
'30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.608702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 00:52:21.608706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.608721 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.608728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.608732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 00:52:21.608738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.608753 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.608757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.608763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 00:52:21.608767 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 00:52:21.608780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 00:52:21.608784 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.608788 | orchestrator | 2026-04-02 00:52:21.608792 | 
orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-02 00:52:21.608795 | orchestrator | Thursday 02 April 2026 00:50:55 +0000 (0:00:00.884) 0:04:31.464 ******** 2026-04-02 00:52:21.608800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-02 00:52:21.608804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-02 00:52:21.608808 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.608814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-02 00:52:21.608818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-02 00:52:21.608822 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.608826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-02 00:52:21.608830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-02 00:52:21.608834 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.608837 | orchestrator | 
2026-04-02 00:52:21.608841 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-02 00:52:21.608845 | orchestrator | Thursday 02 April 2026 00:50:56 +0000 (0:00:00.839) 0:04:32.304 ******** 2026-04-02 00:52:21.608851 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.608858 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.608861 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.608865 | orchestrator | 2026-04-02 00:52:21.608869 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-02 00:52:21.608873 | orchestrator | Thursday 02 April 2026 00:50:57 +0000 (0:00:01.425) 0:04:33.729 ******** 2026-04-02 00:52:21.608877 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.608880 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.608884 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.608888 | orchestrator | 2026-04-02 00:52:21.608892 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-02 00:52:21.608895 | orchestrator | Thursday 02 April 2026 00:51:00 +0000 (0:00:02.197) 0:04:35.926 ******** 2026-04-02 00:52:21.608899 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.608903 | orchestrator | 2026-04-02 00:52:21.608908 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-02 00:52:21.608914 | orchestrator | Thursday 02 April 2026 00:51:01 +0000 (0:00:01.605) 0:04:37.532 ******** 2026-04-02 00:52:21.608919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:52:21.608927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:52:21.608936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:52:21.608944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 00:52:21.608956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 00:52:21.609009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 00:52:21.609050 | orchestrator | 2026-04-02 00:52:21.609056 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-02 00:52:21.609062 | orchestrator | Thursday 02 April 2026 00:51:06 +0000 (0:00:05.173) 0:04:42.706 ******** 2026-04-02 00:52:21.609436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 2026-04-02 00:52:21.609475 | orchestrator | 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:52:21.609496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:52:21.609504 | orchestrator | skipping: [testbed-node-0] 2026-04-02 
00:52:21.609541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:52:21.609550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:52:21.609557 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 00:52:21.609573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:52:21.609589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:52:21.609596 
| orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.609602 | orchestrator | 2026-04-02 00:52:21.609609 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-02 00:52:21.609616 | orchestrator | Thursday 02 April 2026 00:51:07 +0000 (0:00:00.802) 0:04:43.508 ******** 2026-04-02 00:52:21.609623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-02 00:52:21.609679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-02 00:52:21.609688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-02 00:52:21.609696 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.609702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-02 00:52:21.609709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-02 00:52:21.609715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  
2026-04-02 00:52:21.609722 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.609728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-02 00:52:21.609735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-02 00:52:21.609741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-02 00:52:21.609748 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.609754 | orchestrator | 2026-04-02 00:52:21.609761 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-02 00:52:21.609849 | orchestrator | Thursday 02 April 2026 00:51:08 +0000 (0:00:01.072) 0:04:44.581 ******** 2026-04-02 00:52:21.609857 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.609863 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.609870 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.609876 | orchestrator | 2026-04-02 00:52:21.609882 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-02 00:52:21.609904 | orchestrator | Thursday 02 April 2026 00:51:09 +0000 (0:00:00.386) 0:04:44.967 ******** 2026-04-02 00:52:21.609912 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.609918 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.609925 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.609931 | orchestrator | 2026-04-02 
00:52:21.609937 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-02 00:52:21.609944 | orchestrator | Thursday 02 April 2026 00:51:10 +0000 (0:00:01.122) 0:04:46.090 ********
2026-04-02 00:52:21.609950 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:52:21.609957 | orchestrator |
2026-04-02 00:52:21.609963 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-02 00:52:21.609969 | orchestrator | Thursday 02 April 2026 00:51:11 +0000 (0:00:01.561) 0:04:47.651 ********
2026-04-02 00:52:21.609980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-02 00:52:21.609988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 00:52:21.609996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-02 00:52:21.610040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-02 00:52:21.610076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 00:52:21.610086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 00:52:21.610107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-02 00:52:21.610170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-02 00:52:21.610218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-02 00:52:21.610242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-02 00:52:21.610253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-02 00:52:21.610278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-02 00:52:21.610289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610340 | orchestrator |
2026-04-02 00:52:21.610346 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-02 00:52:21.610353 | orchestrator | Thursday 02 April 2026 00:51:15 +0000 (0:00:03.619) 0:04:51.271 ********
2026-04-02 00:52:21.610359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-02 00:52:21.610369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 00:52:21.610378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-02 00:52:21.610403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-02 00:52:21.610407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610422 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:52:21.610429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-02 00:52:21.610434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 00:52:21.610444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-02 00:52:21.610481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-02 00:52:21.610488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-02 00:52:21.610498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 00:52:21.610505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610557 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:52:21.610564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-02 00:52:21.610572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-02 00:52:21.610582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 00:52:21.610596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 00:52:21.610607 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:52:21.610613 | orchestrator |
2026-04-02 00:52:21.610619 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-02 00:52:21.610629 | orchestrator | Thursday 02 April 2026 00:51:16 +0000 (0:00:00.721) 0:04:51.993 ********
2026-04-02 00:52:21.610637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-02 00:52:21.610652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-02 00:52:21.610660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-02 00:52:21.610666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-02 00:52:21.610671 | 
orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.610676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-02 00:52:21.610680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-02 00:52:21.610685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-02 00:52:21.610690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-02 00:52:21.610696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-02 00:52:21.610703 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.610709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-02 00:52:21.610715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-02 00:52:21.610726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-02 00:52:21.610734 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.610740 | orchestrator | 2026-04-02 00:52:21.610747 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-02 00:52:21.610753 | orchestrator | Thursday 02 April 2026 00:51:17 +0000 (0:00:01.022) 0:04:53.015 ******** 2026-04-02 00:52:21.610759 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.610766 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.610772 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.610779 | orchestrator | 2026-04-02 00:52:21.610785 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-02 00:52:21.610797 | orchestrator | Thursday 02 April 2026 00:51:17 +0000 (0:00:00.400) 0:04:53.416 ******** 2026-04-02 00:52:21.610801 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.610805 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.610808 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.610812 | orchestrator | 2026-04-02 00:52:21.610816 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-02 00:52:21.610820 | orchestrator | Thursday 02 April 2026 00:51:18 +0000 (0:00:01.195) 0:04:54.611 ******** 2026-04-02 00:52:21.610828 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.610834 | orchestrator | 2026-04-02 
00:52:21.610840 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-02 00:52:21.610845 | orchestrator | Thursday 02 April 2026 00:51:20 +0000 (0:00:01.331) 0:04:55.943 ******** 2026-04-02 00:52:21.610851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:52:21.610858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:52:21.610868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-02 00:52:21.610876 | orchestrator | 2026-04-02 00:52:21.610882 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-02 00:52:21.610893 | orchestrator | Thursday 02 April 2026 00:51:22 +0000 (0:00:02.442) 0:04:58.386 ******** 2026-04-02 00:52:21.610906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-02 00:52:21.610913 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.610919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-02 00:52:21.610925 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.610929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-02 00:52:21.610933 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.610936 | orchestrator | 2026-04-02 00:52:21.610940 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-02 00:52:21.610944 | orchestrator | Thursday 02 April 2026 00:51:22 +0000 (0:00:00.361) 0:04:58.747 ******** 2026-04-02 00:52:21.610948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-02 00:52:21.610952 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.610956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-02 00:52:21.610963 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.610967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-02 00:52:21.610973 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.610977 | orchestrator | 2026-04-02 00:52:21.610981 | orchestrator | TASK 
[proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-02 00:52:21.610984 | orchestrator | Thursday 02 April 2026 00:51:23 +0000 (0:00:00.567) 0:04:59.315 ******** 2026-04-02 00:52:21.610988 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.610992 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.610996 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.610999 | orchestrator | 2026-04-02 00:52:21.611003 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-02 00:52:21.611007 | orchestrator | Thursday 02 April 2026 00:51:24 +0000 (0:00:00.617) 0:04:59.932 ******** 2026-04-02 00:52:21.611011 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611014 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611018 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611022 | orchestrator | 2026-04-02 00:52:21.611025 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-02 00:52:21.611029 | orchestrator | Thursday 02 April 2026 00:51:25 +0000 (0:00:01.115) 0:05:01.048 ******** 2026-04-02 00:52:21.611033 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:52:21.611037 | orchestrator | 2026-04-02 00:52:21.611041 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-02 00:52:21.611044 | orchestrator | Thursday 02 April 2026 00:51:26 +0000 (0:00:01.335) 0:05:02.383 ******** 2026-04-02 00:52:21.611051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.611055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.611060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.611070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.611077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.611081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-02 00:52:21.611085 | orchestrator | 2026-04-02 00:52:21.611089 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-02 00:52:21.611093 | orchestrator | Thursday 02 April 2026 00:51:32 +0000 (0:00:05.600) 0:05:07.984 ******** 2026-04-02 00:52:21.611099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.611117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.611126 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.611142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.611148 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.611171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-02 00:52:21.611193 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611201 | orchestrator | 2026-04-02 00:52:21.611205 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-02 00:52:21.611209 | orchestrator | Thursday 02 April 2026 00:51:33 +0000 (0:00:00.895) 0:05:08.880 ******** 
2026-04-02 00:52:21.611212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611231 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611251 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-02 00:52:21.611273 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611277 | orchestrator | 2026-04-02 00:52:21.611281 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-02 00:52:21.611285 | orchestrator | Thursday 02 April 2026 00:51:34 +0000 (0:00:00.966) 0:05:09.846 ******** 2026-04-02 00:52:21.611289 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.611292 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.611296 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.611300 | orchestrator | 2026-04-02 00:52:21.611304 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-02 
00:52:21.611307 | orchestrator | Thursday 02 April 2026 00:51:35 +0000 (0:00:01.363) 0:05:11.210 ******** 2026-04-02 00:52:21.611311 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.611315 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.611319 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.611322 | orchestrator | 2026-04-02 00:52:21.611326 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-02 00:52:21.611330 | orchestrator | Thursday 02 April 2026 00:51:37 +0000 (0:00:02.352) 0:05:13.562 ******** 2026-04-02 00:52:21.611334 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611338 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611341 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611345 | orchestrator | 2026-04-02 00:52:21.611349 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-02 00:52:21.611353 | orchestrator | Thursday 02 April 2026 00:51:38 +0000 (0:00:00.610) 0:05:14.173 ******** 2026-04-02 00:52:21.611356 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611363 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611367 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611371 | orchestrator | 2026-04-02 00:52:21.611375 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-02 00:52:21.611378 | orchestrator | Thursday 02 April 2026 00:51:38 +0000 (0:00:00.309) 0:05:14.483 ******** 2026-04-02 00:52:21.611382 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611386 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611390 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611394 | orchestrator | 2026-04-02 00:52:21.611397 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-02 
00:52:21.611401 | orchestrator | Thursday 02 April 2026 00:51:38 +0000 (0:00:00.342) 0:05:14.826 ******** 2026-04-02 00:52:21.611406 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611412 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611418 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611423 | orchestrator | 2026-04-02 00:52:21.611429 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-02 00:52:21.611435 | orchestrator | Thursday 02 April 2026 00:51:39 +0000 (0:00:00.320) 0:05:15.146 ******** 2026-04-02 00:52:21.611440 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611446 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611451 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611456 | orchestrator | 2026-04-02 00:52:21.611462 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-02 00:52:21.611473 | orchestrator | Thursday 02 April 2026 00:51:39 +0000 (0:00:00.568) 0:05:15.714 ******** 2026-04-02 00:52:21.611479 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611489 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611495 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611501 | orchestrator | 2026-04-02 00:52:21.611507 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-02 00:52:21.611513 | orchestrator | Thursday 02 April 2026 00:51:40 +0000 (0:00:00.584) 0:05:16.299 ******** 2026-04-02 00:52:21.611520 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611525 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611529 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611532 | orchestrator | 2026-04-02 00:52:21.611536 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-02 00:52:21.611540 | 
orchestrator | Thursday 02 April 2026 00:51:41 +0000 (0:00:00.654) 0:05:16.954 ******** 2026-04-02 00:52:21.611544 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611547 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611551 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611555 | orchestrator | 2026-04-02 00:52:21.611558 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-02 00:52:21.611562 | orchestrator | Thursday 02 April 2026 00:51:41 +0000 (0:00:00.627) 0:05:17.581 ******** 2026-04-02 00:52:21.611566 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611570 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611573 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611577 | orchestrator | 2026-04-02 00:52:21.611581 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-02 00:52:21.611585 | orchestrator | Thursday 02 April 2026 00:51:42 +0000 (0:00:00.870) 0:05:18.452 ******** 2026-04-02 00:52:21.611588 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611592 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611596 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611599 | orchestrator | 2026-04-02 00:52:21.611603 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-02 00:52:21.611607 | orchestrator | Thursday 02 April 2026 00:51:43 +0000 (0:00:00.947) 0:05:19.400 ******** 2026-04-02 00:52:21.611610 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611614 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611618 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611621 | orchestrator | 2026-04-02 00:52:21.611625 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-02 00:52:21.611629 | orchestrator | Thursday 02 April 2026 00:51:44 +0000 (0:00:00.827) 
0:05:20.227 ******** 2026-04-02 00:52:21.611633 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.611636 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.611640 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.611644 | orchestrator | 2026-04-02 00:52:21.611647 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-02 00:52:21.611651 | orchestrator | Thursday 02 April 2026 00:51:49 +0000 (0:00:05.038) 0:05:25.265 ******** 2026-04-02 00:52:21.611655 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611658 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611662 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611666 | orchestrator | 2026-04-02 00:52:21.611670 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-02 00:52:21.611673 | orchestrator | Thursday 02 April 2026 00:51:52 +0000 (0:00:03.018) 0:05:28.284 ******** 2026-04-02 00:52:21.611677 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.611681 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.611684 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.611688 | orchestrator | 2026-04-02 00:52:21.611692 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-02 00:52:21.611696 | orchestrator | Thursday 02 April 2026 00:52:01 +0000 (0:00:09.009) 0:05:37.293 ******** 2026-04-02 00:52:21.611703 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611707 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611710 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611714 | orchestrator | 2026-04-02 00:52:21.611718 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-02 00:52:21.611722 | orchestrator | Thursday 02 April 2026 00:52:05 +0000 (0:00:03.784) 0:05:41.077 ******** 2026-04-02 
00:52:21.611725 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:52:21.611729 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:52:21.611733 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:52:21.611737 | orchestrator | 2026-04-02 00:52:21.611740 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-02 00:52:21.611744 | orchestrator | Thursday 02 April 2026 00:52:14 +0000 (0:00:09.160) 0:05:50.238 ******** 2026-04-02 00:52:21.611748 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611752 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611755 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611759 | orchestrator | 2026-04-02 00:52:21.611766 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-02 00:52:21.611770 | orchestrator | Thursday 02 April 2026 00:52:14 +0000 (0:00:00.526) 0:05:50.764 ******** 2026-04-02 00:52:21.611773 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611777 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611781 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611785 | orchestrator | 2026-04-02 00:52:21.611788 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-02 00:52:21.611792 | orchestrator | Thursday 02 April 2026 00:52:15 +0000 (0:00:00.305) 0:05:51.069 ******** 2026-04-02 00:52:21.611796 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611799 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611803 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611807 | orchestrator | 2026-04-02 00:52:21.611811 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-02 00:52:21.611815 | orchestrator | Thursday 02 April 2026 00:52:15 +0000 (0:00:00.301) 0:05:51.371 ******** 2026-04-02 
00:52:21.611818 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611822 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611826 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611829 | orchestrator | 2026-04-02 00:52:21.611833 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-02 00:52:21.611837 | orchestrator | Thursday 02 April 2026 00:52:15 +0000 (0:00:00.299) 0:05:51.670 ******** 2026-04-02 00:52:21.611841 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611844 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611848 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611852 | orchestrator | 2026-04-02 00:52:21.611858 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-02 00:52:21.611862 | orchestrator | Thursday 02 April 2026 00:52:16 +0000 (0:00:00.520) 0:05:52.191 ******** 2026-04-02 00:52:21.611866 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:52:21.611869 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:52:21.611873 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:52:21.611877 | orchestrator | 2026-04-02 00:52:21.611881 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-02 00:52:21.611884 | orchestrator | Thursday 02 April 2026 00:52:16 +0000 (0:00:00.317) 0:05:52.508 ******** 2026-04-02 00:52:21.611888 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611892 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611896 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611899 | orchestrator | 2026-04-02 00:52:21.611903 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-02 00:52:21.611907 | orchestrator | Thursday 02 April 2026 00:52:17 +0000 (0:00:00.862) 0:05:53.371 ******** 2026-04-02 00:52:21.611911 | 
orchestrator | ok: [testbed-node-0] 2026-04-02 00:52:21.611918 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:52:21.611922 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:52:21.611925 | orchestrator | 2026-04-02 00:52:21.611929 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:52:21.611933 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-02 00:52:21.611938 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-02 00:52:21.611941 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-02 00:52:21.611945 | orchestrator | 2026-04-02 00:52:21.611949 | orchestrator | 2026-04-02 00:52:21.611953 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:52:21.611957 | orchestrator | Thursday 02 April 2026 00:52:18 +0000 (0:00:00.806) 0:05:54.178 ******** 2026-04-02 00:52:21.611960 | orchestrator | =============================================================================== 2026-04-02 00:52:21.611964 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.16s 2026-04-02 00:52:21.611968 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.01s 2026-04-02 00:52:21.611972 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.60s 2026-04-02 00:52:21.611975 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.17s 2026-04-02 00:52:21.611979 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.16s 2026-04-02 00:52:21.611983 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.06s 2026-04-02 00:52:21.611987 | orchestrator | haproxy-config 
: Copying over designate haproxy config ------------------ 5.05s 2026-04-02 00:52:21.611990 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.04s 2026-04-02 00:52:21.611994 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.55s 2026-04-02 00:52:21.611998 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.30s 2026-04-02 00:52:21.612002 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.07s 2026-04-02 00:52:21.612005 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.84s 2026-04-02 00:52:21.612009 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.78s 2026-04-02 00:52:21.612013 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.75s 2026-04-02 00:52:21.612016 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.73s 2026-04-02 00:52:21.612020 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.73s 2026-04-02 00:52:21.612024 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.68s 2026-04-02 00:52:21.612029 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.62s 2026-04-02 00:52:21.612033 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.60s 2026-04-02 00:52:21.612037 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.54s 2026-04-02 00:52:24.640302 | orchestrator | 2026-04-02 00:52:24 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED 2026-04-02 00:52:24.640403 | orchestrator | 2026-04-02 00:52:24 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:52:24.641493 | orchestrator | 
2026-04-02 00:52:24 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:52:24.641525 | orchestrator | 2026-04-02 00:52:24 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:53:59.128049 | orchestrator | 2026-04-02 00:53:59 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state 
STARTED 2026-04-02 00:53:59.129307 | orchestrator | 2026-04-02 00:53:59 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state STARTED 2026-04-02 00:53:59.130774 | orchestrator | 2026-04-02 00:53:59 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:53:59.130993 | orchestrator | 2026-04-02 00:53:59 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:54:02.187090 | orchestrator | 2026-04-02 00:54:02 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED 2026-04-02 00:54:02.194192 | orchestrator | 2026-04-02 00:54:02 | INFO  | Task 9c86ac7e-26d3-40eb-b324-2bf98e033d53 is in state SUCCESS 2026-04-02 00:54:02.196119 | orchestrator | 2026-04-02 00:54:02.196187 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-02 00:54:02.196196 | orchestrator | 2.16.14 2026-04-02 00:54:02.196204 | orchestrator | 2026-04-02 00:54:02.196211 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-02 00:54:02.196219 | orchestrator | 2026-04-02 00:54:02.196226 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-02 00:54:02.196234 | orchestrator | Thursday 02 April 2026 00:44:05 +0000 (0:00:00.738) 0:00:00.738 ******** 2026-04-02 00:54:02.196242 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.196250 | orchestrator | 2026-04-02 00:54:02.196257 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-02 00:54:02.196265 | orchestrator | Thursday 02 April 2026 00:44:06 +0000 (0:00:01.178) 0:00:01.917 ******** 2026-04-02 00:54:02.196273 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.196281 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.196288 | orchestrator | ok: 
[testbed-node-3] 2026-04-02 00:54:02.196317 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.196324 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.196331 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.196338 | orchestrator | 2026-04-02 00:54:02.196345 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-02 00:54:02.196352 | orchestrator | Thursday 02 April 2026 00:44:08 +0000 (0:00:02.046) 0:00:03.964 ******** 2026-04-02 00:54:02.196359 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.196366 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.196373 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.196379 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.196425 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.196435 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.196442 | orchestrator | 2026-04-02 00:54:02.196448 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-02 00:54:02.196455 | orchestrator | Thursday 02 April 2026 00:44:09 +0000 (0:00:00.669) 0:00:04.633 ******** 2026-04-02 00:54:02.196461 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.196467 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.196474 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.196480 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.196486 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.196563 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.196571 | orchestrator | 2026-04-02 00:54:02.196577 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-02 00:54:02.196584 | orchestrator | Thursday 02 April 2026 00:44:10 +0000 (0:00:00.950) 0:00:05.584 ******** 2026-04-02 00:54:02.196590 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.196597 | orchestrator | ok: [testbed-node-4] 2026-04-02 
00:54:02.196603 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.196798 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.196807 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.196814 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.196850 | orchestrator | 2026-04-02 00:54:02.196857 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-02 00:54:02.196864 | orchestrator | Thursday 02 April 2026 00:44:11 +0000 (0:00:01.243) 0:00:06.827 ******** 2026-04-02 00:54:02.196870 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.196877 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.196883 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.196889 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.196895 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.196901 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.196908 | orchestrator | 2026-04-02 00:54:02.196914 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-02 00:54:02.196920 | orchestrator | Thursday 02 April 2026 00:44:12 +0000 (0:00:01.044) 0:00:07.872 ******** 2026-04-02 00:54:02.196935 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.196942 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.196948 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.196954 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.196961 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.196967 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.196973 | orchestrator | 2026-04-02 00:54:02.196979 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-02 00:54:02.196986 | orchestrator | Thursday 02 April 2026 00:44:13 +0000 (0:00:01.258) 0:00:09.130 ******** 2026-04-02 00:54:02.196992 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.197044 | 
orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.197051 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.197058 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.197064 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.197070 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.197077 | orchestrator | 2026-04-02 00:54:02.197083 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-02 00:54:02.197089 | orchestrator | Thursday 02 April 2026 00:44:14 +0000 (0:00:00.900) 0:00:10.031 ******** 2026-04-02 00:54:02.197096 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.197102 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.197108 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.197114 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.197121 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.197127 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.197144 | orchestrator | 2026-04-02 00:54:02.197150 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-02 00:54:02.197156 | orchestrator | Thursday 02 April 2026 00:44:15 +0000 (0:00:01.191) 0:00:11.222 ******** 2026-04-02 00:54:02.197163 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-02 00:54:02.197167 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-02 00:54:02.197171 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-02 00:54:02.197175 | orchestrator | 2026-04-02 00:54:02.197179 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-02 00:54:02.197182 | orchestrator | Thursday 02 April 2026 00:44:16 +0000 (0:00:00.991) 0:00:12.214 ******** 2026-04-02 00:54:02.197186 | orchestrator | ok: 
[testbed-node-4] 2026-04-02 00:54:02.197196 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.197200 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.197211 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.197215 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.197219 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.197223 | orchestrator | 2026-04-02 00:54:02.197343 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-02 00:54:02.197349 | orchestrator | Thursday 02 April 2026 00:44:18 +0000 (0:00:01.486) 0:00:13.700 ******** 2026-04-02 00:54:02.197353 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-02 00:54:02.197358 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-02 00:54:02.197364 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-02 00:54:02.197370 | orchestrator | 2026-04-02 00:54:02.197377 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-02 00:54:02.197383 | orchestrator | Thursday 02 April 2026 00:44:21 +0000 (0:00:02.683) 0:00:16.384 ******** 2026-04-02 00:54:02.197389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-02 00:54:02.197396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-02 00:54:02.197402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-02 00:54:02.197409 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.197413 | orchestrator | 2026-04-02 00:54:02.197417 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-02 00:54:02.197421 | orchestrator | Thursday 02 April 2026 00:44:21 +0000 (0:00:00.548) 0:00:16.933 ******** 2026-04-02 00:54:02.197426 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197445 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.197451 | orchestrator | 2026-04-02 00:54:02.197458 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-02 00:54:02.197464 | orchestrator | Thursday 02 April 2026 00:44:22 +0000 (0:00:01.136) 0:00:18.069 ******** 2026-04-02 00:54:02.197473 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197504 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.197508 | orchestrator | 2026-04-02 00:54:02.197512 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-02 00:54:02.197516 | orchestrator | Thursday 02 April 2026 00:44:23 +0000 (0:00:00.178) 0:00:18.247 ******** 2026-04-02 00:54:02.197571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-02 00:44:19.220679', 'end': '2026-04-02 00:44:19.332865', 'delta': '0:00:00.112186', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-02 00:44:19.963386', 'end': '2026-04-02 00:44:20.060756', 'delta': '0:00:00.097370', 'msg': '', 'invocation': {'module_args': 
{'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-02 00:44:20.812781', 'end': '2026-04-02 00:44:20.905208', 'delta': '0:00:00.092427', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.197596 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.197604 | orchestrator | 2026-04-02 00:54:02.197611 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-02 00:54:02.197618 | orchestrator | Thursday 02 April 2026 00:44:23 +0000 (0:00:00.542) 0:00:18.790 ******** 2026-04-02 00:54:02.197625 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.197632 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.197638 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.197645 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.197652 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.197659 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.197666 | orchestrator | 2026-04-02 
00:54:02.197672 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-02 00:54:02.197679 | orchestrator | Thursday 02 April 2026 00:44:25 +0000 (0:00:01.969) 0:00:20.760 ********
2026-04-02 00:54:02.197686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-02 00:54:02.197694 | orchestrator |
2026-04-02 00:54:02.197701 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-02 00:54:02.197707 | orchestrator | Thursday 02 April 2026 00:44:26 +0000 (0:00:01.026) 0:00:21.786 ********
2026-04-02 00:54:02.197714 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.197723 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.197730 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.197734 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.197738 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.197742 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.197746 | orchestrator |
2026-04-02 00:54:02.197995 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-02 00:54:02.198004 | orchestrator | Thursday 02 April 2026 00:44:28 +0000 (0:00:01.531) 0:00:23.318 ********
2026-04-02 00:54:02.198011 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198045 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.198049 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.198053 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.198057 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.198060 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.198064 | orchestrator |
2026-04-02 00:54:02.198082 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-02 00:54:02.198086 | orchestrator | Thursday 02 April 2026 00:44:29 +0000 (0:00:01.216) 0:00:24.534 ********
2026-04-02 00:54:02.198090 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198097 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.198103 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.198109 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.198115 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.198121 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.198127 | orchestrator |
2026-04-02 00:54:02.198164 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-02 00:54:02.198170 | orchestrator | Thursday 02 April 2026 00:44:29 +0000 (0:00:00.669) 0:00:25.203 ********
2026-04-02 00:54:02.198176 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198182 | orchestrator |
2026-04-02 00:54:02.198188 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-02 00:54:02.198239 | orchestrator | Thursday 02 April 2026 00:44:30 +0000 (0:00:00.140) 0:00:25.344 ********
2026-04-02 00:54:02.198248 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198254 | orchestrator |
2026-04-02 00:54:02.198260 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-02 00:54:02.198267 | orchestrator | Thursday 02 April 2026 00:44:30 +0000 (0:00:00.235) 0:00:25.579 ********
2026-04-02 00:54:02.198274 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198280 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.198286 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.198311 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.198319 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.198325 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.198332 | orchestrator |
2026-04-02 00:54:02.198339 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-02 00:54:02.198345 | orchestrator | Thursday 02 April 2026 00:44:31 +0000 (0:00:00.743) 0:00:26.322 ********
2026-04-02 00:54:02.198379 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198387 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.198394 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.198401 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.198407 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.198414 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.198420 | orchestrator |
2026-04-02 00:54:02.198426 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-02 00:54:02.198432 | orchestrator | Thursday 02 April 2026 00:44:31 +0000 (0:00:00.746) 0:00:27.069 ********
2026-04-02 00:54:02.198439 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198445 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.198451 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.198457 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.198471 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.198477 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.198483 | orchestrator |
2026-04-02 00:54:02.198490 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-02 00:54:02.198496 | orchestrator | Thursday 02 April 2026 00:44:32 +0000 (0:00:00.906) 0:00:27.976 ********
2026-04-02 00:54:02.198503 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198509 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.198516 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.198522 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.198666 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.198674 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.198680 | orchestrator |
2026-04-02 00:54:02.198687 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-02 00:54:02.198693 | orchestrator | Thursday 02 April 2026 00:44:33 +0000 (0:00:00.826) 0:00:28.803 ********
2026-04-02 00:54:02.198699 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.198705 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.198711 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.198718 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.198724 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.198730 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.199062 | orchestrator |
2026-04-02 00:54:02.199071 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-02 00:54:02.199078 | orchestrator | Thursday 02 April 2026 00:44:34 +0000 (0:00:00.499) 0:00:29.302 ********
2026-04-02 00:54:02.199084 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.199091 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.199097 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.199104 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.199110 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.199117 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.199123 | orchestrator |
2026-04-02 00:54:02.199130 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-02 00:54:02.199149 | orchestrator | Thursday 02 April 2026 00:44:34 +0000 (0:00:00.672) 0:00:29.975 ********
2026-04-02 00:54:02.199155 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.199161 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.199167 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.199173 | orchestrator |
skipping: [testbed-node-0] 2026-04-02 00:54:02.199179 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.199240 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.199250 | orchestrator | 2026-04-02 00:54:02.199261 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-02 00:54:02.199268 | orchestrator | Thursday 02 April 2026 00:44:35 +0000 (0:00:00.724) 0:00:30.699 ******** 2026-04-02 00:54:02.199276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb-osd--block--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb', 'dm-uuid-LVM-1MTXoGF8o53qkDTSPtxC3aThD3vdY9e755qwrpVQd1mUdwCow4Ywk178cgvEkFc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c3a3e1f2--53da--5696--b7a3--d36d02964763-osd--block--c3a3e1f2--53da--5696--b7a3--d36d02964763', 'dm-uuid-LVM-hTQpxbX1AFLcmtQHNUWfcNukVXxPFxHQ3EK7GjgLiZfCRXz108x0HJCxENks5HKf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.199460 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb-osd--block--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VZf4A8-Vl5Y-RfGE-02Wv-400i-5pCQ-Pd3NQz', 'scsi-0QEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb', 'scsi-SQEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.199781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c3a3e1f2--53da--5696--b7a3--d36d02964763-osd--block--c3a3e1f2--53da--5696--b7a3--d36d02964763'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-77F7uO-Apmc-H24C-qSBW-Epdk-PXaJ-z2vjIe', 'scsi-0QEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45', 'scsi-SQEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.199839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161', 'scsi-SQEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.199855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--88a5a1a0--9236--5c9d--8025--e39ec03fb505-osd--block--88a5a1a0--9236--5c9d--8025--e39ec03fb505', 'dm-uuid-LVM-WMI3nYyBf7h4a35UZ2BgnO9vqcyxNCNvD7goC0THY81DlzhSjoy8A79FmmjpRb1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.199914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27c5b00--4597--5124--934a--fd641c3feb65-osd--block--b27c5b00--4597--5124--934a--fd641c3feb65', 
'dm-uuid-LVM-h3QAXOvc3sBuPMb0fptvx6xk5sLFRoS4xCd5UbEvm5kMw6J5pD02ABDp4W7c0Nb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.199963 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.200383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--88a5a1a0--9236--5c9d--8025--e39ec03fb505-osd--block--88a5a1a0--9236--5c9d--8025--e39ec03fb505'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cpchoi-yKuF-8aYI-GwtW-pIyk-I2rQ-1zQAUq', 'scsi-0QEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4', 'scsi-SQEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b27c5b00--4597--5124--934a--fd641c3feb65-osd--block--b27c5b00--4597--5124--934a--fd641c3feb65'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0MrAeq-GoNi-4r0V-mMLh-NwwX-eFcS-sZLAKx', 'scsi-0QEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf', 'scsi-SQEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba-osd--block--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba', 'dm-uuid-LVM-G2WDd4XiPx9HORZHRtE3mgDAzOB6fs6NM2nHDmnmtMFr3pKoNoRhNZj7lvGLYpvi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3', 'scsi-SQEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc329f0f--76ef--5b6a--a482--1349b51ce957-osd--block--bc329f0f--76ef--5b6a--a482--1349b51ce957', 'dm-uuid-LVM-sgOQhAC0fLYGMkqYeHhJJ16yOte7p9OEs0xJMNVB78tsOVrhNOvcp9WHftWoN43H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-02 00:54:02.200851 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.200858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part1', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part14', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part15', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part16', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200933 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba-osd--block--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ytxEle-KSA8-0usH-5AQL-iyHO-U5AI-R1BFiA', 'scsi-0QEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a', 'scsi-SQEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bc329f0f--76ef--5b6a--a482--1349b51ce957-osd--block--bc329f0f--76ef--5b6a--a482--1349b51ce957'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tqEXWj-CJSC-viSt-InK1-u1DN-lqDx-dUOwBK', 'scsi-0QEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9', 'scsi-SQEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21', 'scsi-SQEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.200984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.200991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.201052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.201121 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.201130 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.201148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-02 00:54:02.201167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part15', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.201270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.201281 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.201288 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.201299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201381 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:54:02.201402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part1', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part14', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part15', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part16', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.201458 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:54:02.201469 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.201476 | orchestrator | 2026-04-02 00:54:02.201482 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-02 00:54:02.201489 | orchestrator | Thursday 02 April 2026 00:44:37 +0000 (0:00:01.585) 0:00:32.285 ******** 2026-04-02 00:54:02.201496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb-osd--block--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb', 'dm-uuid-LVM-1MTXoGF8o53qkDTSPtxC3aThD3vdY9e755qwrpVQd1mUdwCow4Ywk178cgvEkFc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c3a3e1f2--53da--5696--b7a3--d36d02964763-osd--block--c3a3e1f2--53da--5696--b7a3--d36d02964763', 'dm-uuid-LVM-hTQpxbX1AFLcmtQHNUWfcNukVXxPFxHQ3EK7GjgLiZfCRXz108x0HJCxENks5HKf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201694 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--88a5a1a0--9236--5c9d--8025--e39ec03fb505-osd--block--88a5a1a0--9236--5c9d--8025--e39ec03fb505', 'dm-uuid-LVM-WMI3nYyBf7h4a35UZ2BgnO9vqcyxNCNvD7goC0THY81DlzhSjoy8A79FmmjpRb1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb-osd--block--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VZf4A8-Vl5Y-RfGE-02Wv-400i-5pCQ-Pd3NQz', 'scsi-0QEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb', 'scsi-SQEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201712 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27c5b00--4597--5124--934a--fd641c3feb65-osd--block--b27c5b00--4597--5124--934a--fd641c3feb65', 'dm-uuid-LVM-h3QAXOvc3sBuPMb0fptvx6xk5sLFRoS4xCd5UbEvm5kMw6J5pD02ABDp4W7c0Nb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201756 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c3a3e1f2--53da--5696--b7a3--d36d02964763-osd--block--c3a3e1f2--53da--5696--b7a3--d36d02964763'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-77F7uO-Apmc-H24C-qSBW-Epdk-PXaJ-z2vjIe', 'scsi-0QEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45', 'scsi-SQEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161', 'scsi-SQEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201829 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201886 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201908 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba-osd--block--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba', 'dm-uuid-LVM-G2WDd4XiPx9HORZHRtE3mgDAzOB6fs6NM2nHDmnmtMFr3pKoNoRhNZj7lvGLYpvi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201974 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.201984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc329f0f--76ef--5b6a--a482--1349b51ce957-osd--block--bc329f0f--76ef--5b6a--a482--1349b51ce957', 'dm-uuid-LVM-sgOQhAC0fLYGMkqYeHhJJ16yOte7p9OEs0xJMNVB78tsOVrhNOvcp9WHftWoN43H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202078 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202095 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202102 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.202109 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202208 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d50c10b7-0ee3-49f5-b5db-aaff5f388ee0-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202216 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202269 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202278 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202308 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202352 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202368 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202375 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--88a5a1a0--9236--5c9d--8025--e39ec03fb505-osd--block--88a5a1a0--9236--5c9d--8025--e39ec03fb505'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cpchoi-yKuF-8aYI-GwtW-pIyk-I2rQ-1zQAUq', 'scsi-0QEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4', 'scsi-SQEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202395 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202441 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'sdc', 'value': {'holders': ['ceph--b27c5b00--4597--5124--934a--fd641c3feb65-osd--block--b27c5b00--4597--5124--934a--fd641c3feb65'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0MrAeq-GoNi-4r0V-mMLh-NwwX-eFcS-sZLAKx', 'scsi-0QEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf', 'scsi-SQEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202455 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3', 'scsi-SQEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202485 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202542 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202552 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202559 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202576 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202620 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part1', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part14', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part15', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part16', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202633 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba-osd--block--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ytxEle-KSA8-0usH-5AQL-iyHO-U5AI-R1BFiA', 'scsi-0QEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a', 'scsi-SQEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202640 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.202652 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202659 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.202666 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bc329f0f--76ef--5b6a--a482--1349b51ce957-osd--block--bc329f0f--76ef--5b6a--a482--1349b51ce957'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tqEXWj-CJSC-viSt-InK1-u1DN-lqDx-dUOwBK', 'scsi-0QEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9', 'scsi-SQEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21', 'scsi-SQEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202737 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part15', 
'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e0fe724-9dc6-457e-9830-68d85bc3f312-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202745 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202786 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202800 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.202806 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.202813 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202819 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202826 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202835 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202842 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202852 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202905 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202914 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202925 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part1', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part14', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part15', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part16', 'scsi-SQEMU_QEMU_HARDDISK_924a2fdb-5874-4458-8721-81e9cbcbc15b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202937 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:54:02.202944 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.202950 | orchestrator | 2026-04-02 00:54:02.202993 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-02 00:54:02.203002 | orchestrator | Thursday 02 April 2026 00:44:38 +0000 (0:00:01.275) 0:00:33.561 ******** 2026-04-02 00:54:02.203009 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.203015 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.203022 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.203028 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.203034 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.203050 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.203057 | orchestrator | 2026-04-02 00:54:02.203064 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-02 00:54:02.203071 | orchestrator | Thursday 02 April 2026 00:44:39 +0000 (0:00:01.362) 
0:00:34.924 ******** 2026-04-02 00:54:02.203077 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.203083 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.203089 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.203095 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.203102 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.203108 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.203115 | orchestrator | 2026-04-02 00:54:02.203121 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-02 00:54:02.203128 | orchestrator | Thursday 02 April 2026 00:44:40 +0000 (0:00:00.949) 0:00:35.873 ******** 2026-04-02 00:54:02.203145 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203151 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203156 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203163 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.203169 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.203175 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.203181 | orchestrator | 2026-04-02 00:54:02.203188 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-02 00:54:02.203194 | orchestrator | Thursday 02 April 2026 00:44:41 +0000 (0:00:00.885) 0:00:36.758 ******** 2026-04-02 00:54:02.203200 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203206 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203212 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203218 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.203232 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.203238 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.203244 | orchestrator | 2026-04-02 00:54:02.203251 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] 
*************************** 2026-04-02 00:54:02.203258 | orchestrator | Thursday 02 April 2026 00:44:41 +0000 (0:00:00.461) 0:00:37.220 ******** 2026-04-02 00:54:02.203264 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203271 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203277 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203283 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.203296 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.203302 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.203308 | orchestrator | 2026-04-02 00:54:02.203315 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-02 00:54:02.203321 | orchestrator | Thursday 02 April 2026 00:44:42 +0000 (0:00:00.949) 0:00:38.170 ******** 2026-04-02 00:54:02.203328 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203334 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203342 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203348 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.203355 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.203361 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.203368 | orchestrator | 2026-04-02 00:54:02.203374 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-02 00:54:02.203381 | orchestrator | Thursday 02 April 2026 00:44:43 +0000 (0:00:00.584) 0:00:38.755 ******** 2026-04-02 00:54:02.203387 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-02 00:54:02.203400 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-02 00:54:02.203407 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-02 00:54:02.203413 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-02 00:54:02.203419 | orchestrator | ok: [testbed-node-4] => 
(item=testbed-node-1) 2026-04-02 00:54:02.203426 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-02 00:54:02.203433 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-02 00:54:02.203440 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-02 00:54:02.203447 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-02 00:54:02.203453 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-02 00:54:02.203459 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-02 00:54:02.203465 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-02 00:54:02.203471 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-02 00:54:02.203478 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-02 00:54:02.203484 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-02 00:54:02.203490 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-02 00:54:02.203497 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-02 00:54:02.203503 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-02 00:54:02.203510 | orchestrator | 2026-04-02 00:54:02.203517 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-02 00:54:02.203523 | orchestrator | Thursday 02 April 2026 00:44:47 +0000 (0:00:04.179) 0:00:42.934 ******** 2026-04-02 00:54:02.203529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-02 00:54:02.203536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-02 00:54:02.203542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-02 00:54:02.203548 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203554 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-02 00:54:02.203560 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-1)  2026-04-02 00:54:02.203567 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-02 00:54:02.203573 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203580 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-02 00:54:02.203612 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-02 00:54:02.203619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-02 00:54:02.203625 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203632 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-02 00:54:02.203638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-02 00:54:02.203646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-02 00:54:02.203657 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.203663 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-02 00:54:02.203670 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-02 00:54:02.203677 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-02 00:54:02.203683 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.203690 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-02 00:54:02.203696 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-02 00:54:02.203703 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-02 00:54:02.203709 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.203716 | orchestrator | 2026-04-02 00:54:02.203722 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-02 00:54:02.203729 | orchestrator | Thursday 02 April 2026 00:44:48 +0000 (0:00:01.266) 0:00:44.201 ******** 2026-04-02 00:54:02.203736 | orchestrator | skipping: 
[testbed-node-0] 2026-04-02 00:54:02.203776 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.203784 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.203791 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:54:02.203798 | orchestrator | 2026-04-02 00:54:02.203805 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-02 00:54:02.203812 | orchestrator | Thursday 02 April 2026 00:44:49 +0000 (0:00:01.019) 0:00:45.220 ******** 2026-04-02 00:54:02.203819 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203825 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203832 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203838 | orchestrator | 2026-04-02 00:54:02.203845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-02 00:54:02.203851 | orchestrator | Thursday 02 April 2026 00:44:50 +0000 (0:00:00.423) 0:00:45.644 ******** 2026-04-02 00:54:02.203857 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203863 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203869 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203876 | orchestrator | 2026-04-02 00:54:02.203883 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-02 00:54:02.203890 | orchestrator | Thursday 02 April 2026 00:44:50 +0000 (0:00:00.368) 0:00:46.012 ******** 2026-04-02 00:54:02.203896 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203902 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.203908 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.203914 | orchestrator | 2026-04-02 00:54:02.203920 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address] *************** 2026-04-02 00:54:02.203927 | orchestrator | Thursday 02 April 2026 00:44:51 +0000 (0:00:00.570) 0:00:46.583 ******** 2026-04-02 00:54:02.203932 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.203938 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.203944 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.203950 | orchestrator | 2026-04-02 00:54:02.203960 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-02 00:54:02.203966 | orchestrator | Thursday 02 April 2026 00:44:52 +0000 (0:00:00.843) 0:00:47.427 ******** 2026-04-02 00:54:02.203972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.203979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.203986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.203992 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.203999 | orchestrator | 2026-04-02 00:54:02.204005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-02 00:54:02.204012 | orchestrator | Thursday 02 April 2026 00:44:52 +0000 (0:00:00.295) 0:00:47.723 ******** 2026-04-02 00:54:02.204031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.204038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.204044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.204051 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.204057 | orchestrator | 2026-04-02 00:54:02.204064 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-02 00:54:02.204070 | orchestrator | Thursday 02 April 2026 00:44:52 +0000 (0:00:00.283) 0:00:48.006 ******** 2026-04-02 00:54:02.204077 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.204083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.204089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.204096 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.204102 | orchestrator | 2026-04-02 00:54:02.204108 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-02 00:54:02.204114 | orchestrator | Thursday 02 April 2026 00:44:53 +0000 (0:00:00.276) 0:00:48.283 ******** 2026-04-02 00:54:02.204120 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.204127 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.204144 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.204150 | orchestrator | 2026-04-02 00:54:02.204157 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-02 00:54:02.204164 | orchestrator | Thursday 02 April 2026 00:44:53 +0000 (0:00:00.344) 0:00:48.627 ******** 2026-04-02 00:54:02.204170 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-02 00:54:02.204176 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-02 00:54:02.204207 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-02 00:54:02.204215 | orchestrator | 2026-04-02 00:54:02.204221 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-02 00:54:02.204227 | orchestrator | Thursday 02 April 2026 00:44:54 +0000 (0:00:01.266) 0:00:49.894 ******** 2026-04-02 00:54:02.204234 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-02 00:54:02.204241 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-02 00:54:02.204247 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-02 00:54:02.204253 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-02 00:54:02.204260 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-02 00:54:02.204267 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-02 00:54:02.204273 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-02 00:54:02.204280 | orchestrator | 2026-04-02 00:54:02.204286 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-02 00:54:02.204293 | orchestrator | Thursday 02 April 2026 00:44:55 +0000 (0:00:00.881) 0:00:50.775 ******** 2026-04-02 00:54:02.204299 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-02 00:54:02.204305 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-02 00:54:02.204312 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-02 00:54:02.204318 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-02 00:54:02.204325 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-02 00:54:02.204331 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-02 00:54:02.204337 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-02 00:54:02.204343 | orchestrator | 2026-04-02 00:54:02.204350 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-02 00:54:02.204361 | orchestrator | Thursday 02 April 2026 00:44:58 +0000 (0:00:02.453) 0:00:53.229 ******** 2026-04-02 00:54:02.204368 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.204375 | orchestrator | 2026-04-02 00:54:02.204382 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-02 00:54:02.204388 | orchestrator | Thursday 02 April 2026 00:44:59 +0000 (0:00:01.222) 0:00:54.451 ******** 2026-04-02 00:54:02.204395 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.204401 | orchestrator | 2026-04-02 00:54:02.204407 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-02 00:54:02.204413 | orchestrator | Thursday 02 April 2026 00:45:00 +0000 (0:00:01.146) 0:00:55.597 ******** 2026-04-02 00:54:02.204420 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.204430 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.204436 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.204442 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.204449 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.204456 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.204462 | orchestrator | 2026-04-02 00:54:02.204469 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-02 00:54:02.204475 | orchestrator | Thursday 02 April 2026 00:45:01 +0000 (0:00:01.103) 0:00:56.701 ******** 2026-04-02 00:54:02.204482 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.204488 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.204494 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.204500 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.204507 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.204514 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.204520 | orchestrator | 2026-04-02 
00:54:02.204527 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-02 00:54:02.204533 | orchestrator | Thursday 02 April 2026 00:45:02 +0000 (0:00:00.819) 0:00:57.520 ******** 2026-04-02 00:54:02.204539 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.204546 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.204552 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.204558 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.204564 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.204570 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.204577 | orchestrator | 2026-04-02 00:54:02.204583 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-02 00:54:02.204590 | orchestrator | Thursday 02 April 2026 00:45:03 +0000 (0:00:01.152) 0:00:58.673 ******** 2026-04-02 00:54:02.204597 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.204603 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.204609 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.204616 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.204622 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.204628 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.204635 | orchestrator | 2026-04-02 00:54:02.204641 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-02 00:54:02.204647 | orchestrator | Thursday 02 April 2026 00:45:04 +0000 (0:00:00.844) 0:00:59.518 ******** 2026-04-02 00:54:02.204654 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.204661 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.204667 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.204674 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.204680 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.204706 | 
orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.204714 | orchestrator | 2026-04-02 00:54:02.204720 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-02 00:54:02.204731 | orchestrator | Thursday 02 April 2026 00:45:05 +0000 (0:00:01.631) 0:01:01.150 ******** 2026-04-02 00:54:02.204737 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.204743 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.204750 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.204756 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.204762 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.204769 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.204775 | orchestrator | 2026-04-02 00:54:02.204782 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-02 00:54:02.204788 | orchestrator | Thursday 02 April 2026 00:45:06 +0000 (0:00:00.878) 0:01:02.028 ******** 2026-04-02 00:54:02.204795 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.204801 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.204808 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.204814 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.204820 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.204826 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.204832 | orchestrator | 2026-04-02 00:54:02.204839 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-02 00:54:02.204845 | orchestrator | Thursday 02 April 2026 00:45:07 +0000 (0:00:00.886) 0:01:02.915 ******** 2026-04-02 00:54:02.204851 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.204857 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.204863 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.204869 | orchestrator | ok: 
[testbed-node-3] 2026-04-02 00:54:02.204876 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.204883 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.204889 | orchestrator | 2026-04-02 00:54:02.204896 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-02 00:54:02.204902 | orchestrator | Thursday 02 April 2026 00:45:08 +0000 (0:00:01.307) 0:01:04.222 ******** 2026-04-02 00:54:02.204908 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.204914 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.204921 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.204927 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.204933 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.204940 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.204946 | orchestrator | 2026-04-02 00:54:02.204952 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-02 00:54:02.204959 | orchestrator | Thursday 02 April 2026 00:45:10 +0000 (0:00:01.223) 0:01:05.446 ******** 2026-04-02 00:54:02.204965 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.204971 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.204978 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.204984 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.204990 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.204997 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.205003 | orchestrator | 2026-04-02 00:54:02.205009 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-02 00:54:02.205015 | orchestrator | Thursday 02 April 2026 00:45:11 +0000 (0:00:00.797) 0:01:06.244 ******** 2026-04-02 00:54:02.205022 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.205028 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.205035 | 
orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.205041 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.205047 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.205054 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.205060 | orchestrator | 2026-04-02 00:54:02.205066 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-02 00:54:02.205072 | orchestrator | Thursday 02 April 2026 00:45:11 +0000 (0:00:00.623) 0:01:06.868 ******** 2026-04-02 00:54:02.205079 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.205088 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.205094 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.205105 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.205111 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.205117 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.205124 | orchestrator | 2026-04-02 00:54:02.205139 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-02 00:54:02.205147 | orchestrator | Thursday 02 April 2026 00:45:12 +0000 (0:00:00.942) 0:01:07.811 ******** 2026-04-02 00:54:02.205153 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.205159 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.205166 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.205173 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.205179 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.205186 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.205192 | orchestrator | 2026-04-02 00:54:02.205199 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-02 00:54:02.205205 | orchestrator | Thursday 02 April 2026 00:45:13 +0000 (0:00:00.525) 0:01:08.336 ******** 2026-04-02 00:54:02.205212 | orchestrator | ok: [testbed-node-3] 2026-04-02 
00:54:02.205218 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.205224 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.205231 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.205237 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.205244 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.205250 | orchestrator |
2026-04-02 00:54:02.205256 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-02 00:54:02.205263 | orchestrator | Thursday 02 April 2026 00:45:13 +0000 (0:00:00.842) 0:01:09.179 ********
2026-04-02 00:54:02.205269 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.205276 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.205282 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.205288 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.205295 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.205301 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.205308 | orchestrator |
2026-04-02 00:54:02.205314 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-02 00:54:02.205321 | orchestrator | Thursday 02 April 2026 00:45:14 +0000 (0:00:00.685) 0:01:09.864 ********
2026-04-02 00:54:02.205327 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.205333 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.205339 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.205345 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.205372 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.205380 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.205387 | orchestrator |
2026-04-02 00:54:02.205393 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-02 00:54:02.205400 | orchestrator | Thursday 02 April 2026 00:45:15 +0000 (0:00:00.805) 0:01:10.670 ********
2026-04-02 00:54:02.205406 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.205413 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.205418 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.205422 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.205426 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.205431 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.205438 | orchestrator |
2026-04-02 00:54:02.205444 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-02 00:54:02.205450 | orchestrator | Thursday 02 April 2026 00:45:16 +0000 (0:00:00.847) 0:01:11.517 ********
2026-04-02 00:54:02.205456 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.205462 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.205469 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.205475 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.205481 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.205487 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.205494 | orchestrator |
2026-04-02 00:54:02.205507 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-02 00:54:02.205513 | orchestrator | Thursday 02 April 2026 00:45:17 +0000 (0:00:00.850) 0:01:12.368 ********
2026-04-02 00:54:02.205520 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.205526 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.205532 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.205538 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.205544 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.205550 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.205557 | orchestrator |
2026-04-02 00:54:02.205563 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-02 00:54:02.205570 | orchestrator | Thursday 02 April 2026 00:45:18 +0000 (0:00:01.427) 0:01:13.796 ********
2026-04-02 00:54:02.205576 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.205583 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.205589 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.205596 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.205602 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.205608 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.205615 | orchestrator |
2026-04-02 00:54:02.205621 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-02 00:54:02.205627 | orchestrator | Thursday 02 April 2026 00:45:21 +0000 (0:00:02.480) 0:01:16.277 ********
2026-04-02 00:54:02.205633 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.205639 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.205646 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.205652 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.205659 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.205665 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.205671 | orchestrator |
2026-04-02 00:54:02.205678 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-02 00:54:02.205684 | orchestrator | Thursday 02 April 2026 00:45:23 +0000 (0:00:02.753) 0:01:19.030 ********
2026-04-02 00:54:02.205691 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.205698 | orchestrator |
2026-04-02 00:54:02.205704 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-02 00:54:02.205711 | orchestrator | Thursday 02 April 2026 00:45:24 +0000 (0:00:01.134) 0:01:20.164 ********
2026-04-02 00:54:02.205721 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.205728 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.205734 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.205741 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.205747 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.205754 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.205760 | orchestrator |
2026-04-02 00:54:02.205766 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-02 00:54:02.205772 | orchestrator | Thursday 02 April 2026 00:45:25 +0000 (0:00:00.568) 0:01:20.733 ********
2026-04-02 00:54:02.205779 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.205785 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.205791 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.205798 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.205805 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.205811 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.205818 | orchestrator |
2026-04-02 00:54:02.205824 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-02 00:54:02.205831 | orchestrator | Thursday 02 April 2026 00:45:26 +0000 (0:00:00.785) 0:01:21.519 ********
2026-04-02 00:54:02.205837 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-02 00:54:02.205844 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-02 00:54:02.205856 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-02 00:54:02.205863 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-02 00:54:02.205869 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-02 00:54:02.205876 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-02 00:54:02.205882 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-02 00:54:02.205888 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-02 00:54:02.205894 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-02 00:54:02.205900 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-02 00:54:02.205927 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-02 00:54:02.205935 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-02 00:54:02.205942 | orchestrator |
2026-04-02 00:54:02.205948 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-02 00:54:02.205955 | orchestrator | Thursday 02 April 2026 00:45:27 +0000 (0:00:01.459) 0:01:22.979 ********
2026-04-02 00:54:02.205961 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.205967 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.205974 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.205980 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.205986 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.205993 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.205999 | orchestrator |
2026-04-02 00:54:02.206006 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-02 00:54:02.206035 | orchestrator | Thursday 02 April 2026 00:45:28 +0000 (0:00:01.173) 0:01:24.153 ********
2026-04-02 00:54:02.206045 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206051 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206057 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206064 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206070 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206076 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206083 | orchestrator |
2026-04-02 00:54:02.206089 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-02 00:54:02.206095 | orchestrator | Thursday 02 April 2026 00:45:29 +0000 (0:00:00.635) 0:01:24.788 ********
2026-04-02 00:54:02.206102 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206109 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206115 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206121 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206128 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206171 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206178 | orchestrator |
2026-04-02 00:54:02.206184 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-02 00:54:02.206191 | orchestrator | Thursday 02 April 2026 00:45:30 +0000 (0:00:00.823) 0:01:25.611 ********
2026-04-02 00:54:02.206198 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206204 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206210 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206216 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206223 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206229 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206235 | orchestrator |
2026-04-02 00:54:02.206241 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-02 00:54:02.206247 | orchestrator | Thursday 02 April 2026 00:45:30 +0000 (0:00:00.590) 0:01:26.202 ********
2026-04-02 00:54:02.206254 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.206270 | orchestrator |
2026-04-02 00:54:02.206277 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-02 00:54:02.206284 | orchestrator | Thursday 02 April 2026 00:45:32 +0000 (0:00:01.268) 0:01:27.470 ********
2026-04-02 00:54:02.206290 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.206296 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.206303 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.206309 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.206315 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.206322 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.206328 | orchestrator |
2026-04-02 00:54:02.206368 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-02 00:54:02.206376 | orchestrator | Thursday 02 April 2026 00:46:29 +0000 (0:00:57.327) 0:02:24.798 ********
2026-04-02 00:54:02.206383 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-02 00:54:02.206389 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-02 00:54:02.206396 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-02 00:54:02.206402 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206408 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-02 00:54:02.206414 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-02 00:54:02.206421 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-02 00:54:02.206428 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206434 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-02 00:54:02.206441 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-02 00:54:02.206447 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-02 00:54:02.206454 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206460 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-02 00:54:02.206466 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-02 00:54:02.206473 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-02 00:54:02.206479 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206485 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-02 00:54:02.206492 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-02 00:54:02.206498 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-02 00:54:02.206504 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206534 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-02 00:54:02.206542 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-02 00:54:02.206548 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-02 00:54:02.206555 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206561 | orchestrator |
2026-04-02 00:54:02.206568 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-02 00:54:02.206574 | orchestrator | Thursday 02 April 2026 00:46:30 +0000 (0:00:00.589) 0:02:25.388 ********
2026-04-02 00:54:02.206580 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206587 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206593 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206600 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206607 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206613 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206625 | orchestrator |
2026-04-02 00:54:02.206632 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-02 00:54:02.206638 | orchestrator | Thursday 02 April 2026 00:46:31 +0000 (0:00:00.959) 0:02:26.347 ********
2026-04-02 00:54:02.206645 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206651 | orchestrator |
2026-04-02 00:54:02.206657 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-02 00:54:02.206663 | orchestrator | Thursday 02 April 2026 00:46:31 +0000 (0:00:00.175) 0:02:26.522 ********
2026-04-02 00:54:02.206670 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206676 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206682 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206689 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206695 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206701 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206708 | orchestrator |
2026-04-02 00:54:02.206715 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-02 00:54:02.206721 | orchestrator | Thursday 02 April 2026 00:46:31 +0000 (0:00:00.682) 0:02:27.205 ********
2026-04-02 00:54:02.206727 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206734 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206740 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206746 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206752 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206758 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206764 | orchestrator |
2026-04-02 00:54:02.206771 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-02 00:54:02.206778 | orchestrator | Thursday 02 April 2026 00:46:32 +0000 (0:00:00.748) 0:02:27.954 ********
2026-04-02 00:54:02.206784 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.206791 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.206797 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.206803 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.206809 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.206816 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.206823 | orchestrator |
2026-04-02 00:54:02.206829 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-02 00:54:02.206835 | orchestrator | Thursday 02 April 2026 00:46:33 +0000 (0:00:00.637) 0:02:28.591 ********
2026-04-02 00:54:02.206842 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.206848 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.206854 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.206861 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.206867 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.206874 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.206880 | orchestrator |
2026-04-02 00:54:02.206886 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-02 00:54:02.206899 | orchestrator | Thursday 02 April 2026 00:46:35 +0000 (0:00:02.446) 0:02:31.037 ********
2026-04-02 00:54:02.206906 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.206912 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.206918 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.206924 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.206931 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.206938 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.206944 | orchestrator |
2026-04-02 00:54:02.206950 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-02 00:54:02.206957 | orchestrator | Thursday 02 April 2026 00:46:36 +0000 (0:00:00.548) 0:02:31.585 ********
2026-04-02 00:54:02.206964 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-5, testbed-node-4, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.206971 | orchestrator |
2026-04-02 00:54:02.206977 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-02 00:54:02.206988 | orchestrator | Thursday 02 April 2026 00:46:37 +0000 (0:00:01.023) 0:02:32.608 ********
2026-04-02 00:54:02.206995 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207001 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207007 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207014 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207020 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207027 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207033 | orchestrator |
2026-04-02 00:54:02.207040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-02 00:54:02.207046 | orchestrator | Thursday 02 April 2026 00:46:38 +0000 (0:00:00.718) 0:02:33.327 ********
2026-04-02 00:54:02.207052 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207059 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207066 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207072 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207078 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207084 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207091 | orchestrator |
2026-04-02 00:54:02.207097 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-02 00:54:02.207104 | orchestrator | Thursday 02 April 2026 00:46:38 +0000 (0:00:00.657) 0:02:33.984 ********
2026-04-02 00:54:02.207110 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207117 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207176 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207185 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207191 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207197 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207204 | orchestrator |
2026-04-02 00:54:02.207211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-02 00:54:02.207217 | orchestrator | Thursday 02 April 2026 00:46:39 +0000 (0:00:00.615) 0:02:34.600 ********
2026-04-02 00:54:02.207223 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207230 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207236 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207242 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207248 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207254 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207260 | orchestrator |
2026-04-02 00:54:02.207267 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-02 00:54:02.207273 | orchestrator | Thursday 02 April 2026 00:46:40 +0000 (0:00:00.763) 0:02:35.363 ********
2026-04-02 00:54:02.207279 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207285 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207292 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207298 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207304 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207310 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207316 | orchestrator |
2026-04-02 00:54:02.207323 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-02 00:54:02.207330 | orchestrator | Thursday 02 April 2026 00:46:40 +0000 (0:00:00.646) 0:02:36.009 ********
2026-04-02 00:54:02.207336 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207343 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207349 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207355 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207361 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207368 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207375 | orchestrator |
2026-04-02 00:54:02.207381 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-02 00:54:02.207388 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:00.686) 0:02:36.696 ********
2026-04-02 00:54:02.207394 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207405 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207412 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207418 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207424 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207430 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207437 | orchestrator |
2026-04-02 00:54:02.207443 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-02 00:54:02.207450 | orchestrator | Thursday 02 April 2026 00:46:41 +0000 (0:00:00.492) 0:02:37.189 ********
2026-04-02 00:54:02.207456 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.207462 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.207468 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.207475 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.207481 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.207488 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.207494 | orchestrator |
2026-04-02 00:54:02.207500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-02 00:54:02.207506 | orchestrator | Thursday 02 April 2026 00:46:42 +0000 (0:00:00.745) 0:02:37.934 ********
2026-04-02 00:54:02.207512 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.207518 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.207524 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.207530 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.207537 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.207543 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.207550 | orchestrator |
2026-04-02 00:54:02.207560 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-02 00:54:02.207566 | orchestrator | Thursday 02 April 2026 00:46:43 +0000 (0:00:01.122) 0:02:39.057 ********
2026-04-02 00:54:02.207573 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.207579 | orchestrator |
2026-04-02 00:54:02.207585 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-02 00:54:02.207592 | orchestrator | Thursday 02 April 2026 00:46:44 +0000 (0:00:01.099) 0:02:40.156 ********
2026-04-02 00:54:02.207599 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-02 00:54:02.207606 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-02 00:54:02.207612 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-02 00:54:02.207619 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-02 00:54:02.207625 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-02 00:54:02.207631 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-02 00:54:02.207638 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-02 00:54:02.207644 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-02 00:54:02.207650 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-02 00:54:02.207656 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-02 00:54:02.207663 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-02 00:54:02.207670 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-02 00:54:02.207676 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-02 00:54:02.207682 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-02 00:54:02.207689 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-02 00:54:02.207695 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-02 00:54:02.207701 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-02 00:54:02.207708 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-02 00:54:02.207734 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-02 00:54:02.207742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-02 00:54:02.207753 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-02 00:54:02.207759 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-02 00:54:02.207766 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-02 00:54:02.207772 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-02 00:54:02.207778 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-02 00:54:02.207784 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-02 00:54:02.207791 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-02 00:54:02.207797 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-02 00:54:02.207803 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-02 00:54:02.207809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-02 00:54:02.207815 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-02 00:54:02.207821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-02 00:54:02.207828 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-02 00:54:02.207835 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-02 00:54:02.207842 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-02 00:54:02.207848 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-02 00:54:02.207854 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-02 00:54:02.207860 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-02 00:54:02.207867 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-02 00:54:02.207873 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-02 00:54:02.207879 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-02 00:54:02.207885 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-02 00:54:02.207891 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-02 00:54:02.207897 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-02 00:54:02.207904 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-02 00:54:02.207911 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-02 00:54:02.207917 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-02 00:54:02.207924 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-02 00:54:02.207930 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-02 00:54:02.207936 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-02 00:54:02.207943 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-02 00:54:02.207949 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-02 00:54:02.207955 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-02 00:54:02.207961 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-02 00:54:02.207967 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-02 00:54:02.207977 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-02 00:54:02.207983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-02 00:54:02.207990 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-02 00:54:02.207996 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-02 00:54:02.208003 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-02 00:54:02.208009 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-02 00:54:02.208015 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-02 00:54:02.208027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-02 00:54:02.208033 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-02 00:54:02.208040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-02 00:54:02.208046 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-02 00:54:02.208052 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-02 00:54:02.208058 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-02 00:54:02.208065 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-02 00:54:02.208071 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-02 00:54:02.208077 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-02 00:54:02.208083 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-02 00:54:02.208090 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-02 00:54:02.208096 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-02 00:54:02.208102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-02 00:54:02.208108 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-02 00:54:02.208141 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-02 00:54:02.208149 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-02 00:54:02.208156 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-02 00:54:02.208163 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-02 00:54:02.208170 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-02 00:54:02.208177 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-02 00:54:02.208183 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-02 00:54:02.208189 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-02 00:54:02.208196 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-02 00:54:02.208203 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-02 00:54:02.208209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-02 00:54:02.208216 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-02 00:54:02.208222 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-02 00:54:02.208228 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-02 00:54:02.208235 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-02 00:54:02.208241 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-02 00:54:02.208247 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-02 00:54:02.208254 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-02 00:54:02.208260 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-02 00:54:02.208267 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-02 00:54:02.208273 | orchestrator |
2026-04-02 00:54:02.208279 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-02 00:54:02.208286 | orchestrator | Thursday 02 April 2026 00:46:51 +0000 (0:00:06.274) 0:02:46.431 ********
2026-04-02 00:54:02.208292 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.208298 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.208304 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.208311 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.208323 | orchestrator |
2026-04-02 00:54:02.208329 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-02 00:54:02.208335 | orchestrator | Thursday 02 April 2026 00:46:52 +0000 (0:00:00.975) 0:02:47.406 ********
2026-04-02 00:54:02.208341 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.208348 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.208355 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.208361 | orchestrator |
2026-04-02 00:54:02.208368 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-02 00:54:02.208377 | orchestrator | Thursday 02 April 2026 00:46:53 +0000 (0:00:00.977) 0:02:48.383 ********
2026-04-02 00:54:02.208384 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.208390 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.208396 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.208403 | orchestrator |
2026-04-02 00:54:02.208409 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-02 00:54:02.208415 | orchestrator | Thursday 02 April 2026 00:46:54 +0000 (0:00:01.724) 0:02:50.108 ********
2026-04-02 00:54:02.208422 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.208428 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.208435 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.208441 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.208448 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.208454 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.208461 | orchestrator |
2026-04-02 00:54:02.208467 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-02 00:54:02.208473 | orchestrator | Thursday 02 April 2026 00:46:55 +0000 (0:00:00.733) 0:02:50.841 ********
2026-04-02 00:54:02.208480 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.208486 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.208493 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.208499 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.208506 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.208512 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.208518 | orchestrator |
2026-04-02 00:54:02.208525 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-02 00:54:02.208531 | orchestrator | Thursday 02 April 2026 00:46:56 +0000 (0:00:00.783) 0:02:51.625 ********
2026-04-02 00:54:02.208538 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.208544 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.208551 | orchestrator | skipping: [testbed-node-5] 2026-04-02 
00:54:02.208557 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208564 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.208570 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.208576 | orchestrator | 2026-04-02 00:54:02.208601 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-02 00:54:02.208608 | orchestrator | Thursday 02 April 2026 00:46:57 +0000 (0:00:00.662) 0:02:52.288 ******** 2026-04-02 00:54:02.208615 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.208621 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.208627 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.208634 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208640 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.208646 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.208653 | orchestrator | 2026-04-02 00:54:02.208667 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-02 00:54:02.208674 | orchestrator | Thursday 02 April 2026 00:46:57 +0000 (0:00:00.903) 0:02:53.191 ******** 2026-04-02 00:54:02.208680 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.208686 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.208693 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.208699 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208705 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.208711 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.208717 | orchestrator | 2026-04-02 00:54:02.208724 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-02 00:54:02.208730 | orchestrator | Thursday 02 April 2026 00:46:58 +0000 (0:00:00.757) 0:02:53.948 ******** 2026-04-02 00:54:02.208737 | orchestrator | skipping: 
[testbed-node-3] 2026-04-02 00:54:02.208743 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.208750 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.208756 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208763 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.208769 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.208776 | orchestrator | 2026-04-02 00:54:02.208782 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-02 00:54:02.208789 | orchestrator | Thursday 02 April 2026 00:46:59 +0000 (0:00:00.576) 0:02:54.525 ******** 2026-04-02 00:54:02.208795 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.208802 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.208808 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.208815 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208821 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.208827 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.208834 | orchestrator | 2026-04-02 00:54:02.208840 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-02 00:54:02.208847 | orchestrator | Thursday 02 April 2026 00:47:00 +0000 (0:00:00.895) 0:02:55.420 ******** 2026-04-02 00:54:02.208853 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.208859 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.208865 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.208872 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208878 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.208884 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.208891 | orchestrator | 2026-04-02 00:54:02.208897 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how 
many osds have already been created] *** 2026-04-02 00:54:02.208903 | orchestrator | Thursday 02 April 2026 00:47:00 +0000 (0:00:00.772) 0:02:56.192 ******** 2026-04-02 00:54:02.208910 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208916 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.208922 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.208928 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.208934 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.208940 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.208947 | orchestrator | 2026-04-02 00:54:02.208953 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-02 00:54:02.208963 | orchestrator | Thursday 02 April 2026 00:47:02 +0000 (0:00:01.677) 0:02:57.870 ******** 2026-04-02 00:54:02.208970 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.208976 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.208982 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.208989 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.208995 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209001 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209007 | orchestrator | 2026-04-02 00:54:02.209014 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-02 00:54:02.209025 | orchestrator | Thursday 02 April 2026 00:47:03 +0000 (0:00:00.870) 0:02:58.740 ******** 2026-04-02 00:54:02.209031 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.209038 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.209044 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.209050 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209057 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209063 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209070 | orchestrator | 
2026-04-02 00:54:02.209076 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-02 00:54:02.209083 | orchestrator | Thursday 02 April 2026 00:47:04 +0000 (0:00:01.409) 0:03:00.150 ******** 2026-04-02 00:54:02.209090 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209096 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209103 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209109 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209116 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209122 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209129 | orchestrator | 2026-04-02 00:54:02.209169 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-02 00:54:02.209176 | orchestrator | Thursday 02 April 2026 00:47:05 +0000 (0:00:00.854) 0:03:01.004 ******** 2026-04-02 00:54:02.209182 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-02 00:54:02.209189 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-02 00:54:02.209196 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-02 00:54:02.209202 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209233 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209241 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209247 | orchestrator | 2026-04-02 00:54:02.209253 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-02 00:54:02.209260 | orchestrator | Thursday 02 April 2026 00:47:07 +0000 (0:00:01.236) 0:03:02.241 ******** 2026-04-02 00:54:02.209267 | orchestrator 
| skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-02 00:54:02.209273 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-02 00:54:02.209315 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209321 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-02 00:54:02.209325 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-02 00:54:02.209329 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-02 00:54:02.209337 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 
'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-02 00:54:02.209341 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209345 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209349 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209352 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209356 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209360 | orchestrator | 2026-04-02 00:54:02.209364 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-02 00:54:02.209371 | orchestrator | Thursday 02 April 2026 00:47:07 +0000 (0:00:00.696) 0:03:02.937 ******** 2026-04-02 00:54:02.209375 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209378 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209382 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209386 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209389 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209393 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209397 | orchestrator | 2026-04-02 00:54:02.209401 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-02 00:54:02.209405 | orchestrator | Thursday 02 April 2026 00:47:08 +0000 (0:00:01.058) 0:03:03.996 ******** 2026-04-02 00:54:02.209408 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209412 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209416 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209420 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209424 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209427 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209431 | orchestrator | 2026-04-02 00:54:02.209435 | orchestrator | 
TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-02 00:54:02.209439 | orchestrator | Thursday 02 April 2026 00:47:09 +0000 (0:00:00.961) 0:03:04.957 ******** 2026-04-02 00:54:02.209443 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209446 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209450 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209454 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209457 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209461 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209465 | orchestrator | 2026-04-02 00:54:02.209469 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-02 00:54:02.209473 | orchestrator | Thursday 02 April 2026 00:47:10 +0000 (0:00:01.027) 0:03:05.984 ******** 2026-04-02 00:54:02.209476 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209480 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209484 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209488 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209491 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209495 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209499 | orchestrator | 2026-04-02 00:54:02.209503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-02 00:54:02.209523 | orchestrator | Thursday 02 April 2026 00:47:11 +0000 (0:00:01.015) 0:03:07.000 ******** 2026-04-02 00:54:02.209527 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209531 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209535 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209539 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209542 | 
orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209546 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209550 | orchestrator | 2026-04-02 00:54:02.209554 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-02 00:54:02.209561 | orchestrator | Thursday 02 April 2026 00:47:12 +0000 (0:00:01.013) 0:03:08.014 ******** 2026-04-02 00:54:02.209564 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.209568 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.209572 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209576 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209579 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209583 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.209587 | orchestrator | 2026-04-02 00:54:02.209591 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-02 00:54:02.209594 | orchestrator | Thursday 02 April 2026 00:47:13 +0000 (0:00:01.119) 0:03:09.134 ******** 2026-04-02 00:54:02.209598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.209602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.209606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.209610 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209613 | orchestrator | 2026-04-02 00:54:02.209617 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-02 00:54:02.209621 | orchestrator | Thursday 02 April 2026 00:47:14 +0000 (0:00:00.680) 0:03:09.814 ******** 2026-04-02 00:54:02.209625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.209628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.209632 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.209636 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209639 | orchestrator | 2026-04-02 00:54:02.209643 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-02 00:54:02.209647 | orchestrator | Thursday 02 April 2026 00:47:15 +0000 (0:00:00.762) 0:03:10.576 ******** 2026-04-02 00:54:02.209651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.209654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.209658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.209662 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209665 | orchestrator | 2026-04-02 00:54:02.209669 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-02 00:54:02.209673 | orchestrator | Thursday 02 April 2026 00:47:16 +0000 (0:00:01.078) 0:03:11.655 ******** 2026-04-02 00:54:02.209677 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.209680 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.209684 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.209688 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209692 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209695 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209699 | orchestrator | 2026-04-02 00:54:02.209703 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-02 00:54:02.209707 | orchestrator | Thursday 02 April 2026 00:47:17 +0000 (0:00:00.842) 0:03:12.498 ******** 2026-04-02 00:54:02.209710 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-02 00:54:02.209714 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-02 00:54:02.209718 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-02 00:54:02.209722 
| orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-02 00:54:02.209726 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209729 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-02 00:54:02.209733 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209737 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-02 00:54:02.209741 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209744 | orchestrator | 2026-04-02 00:54:02.209748 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-02 00:54:02.209752 | orchestrator | Thursday 02 April 2026 00:47:19 +0000 (0:00:02.454) 0:03:14.953 ******** 2026-04-02 00:54:02.209761 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.209765 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.209768 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.209772 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.209776 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.209779 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.209783 | orchestrator | 2026-04-02 00:54:02.209787 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-02 00:54:02.209791 | orchestrator | Thursday 02 April 2026 00:47:22 +0000 (0:00:03.167) 0:03:18.120 ******** 2026-04-02 00:54:02.209794 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.209798 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.209802 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.209806 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.209809 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.209813 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.209817 | orchestrator | 2026-04-02 00:54:02.209820 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] 
********************************** 2026-04-02 00:54:02.209824 | orchestrator | Thursday 02 April 2026 00:47:23 +0000 (0:00:01.049) 0:03:19.170 ******** 2026-04-02 00:54:02.209828 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.209832 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.209835 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.209839 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.209843 | orchestrator | 2026-04-02 00:54:02.209847 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-02 00:54:02.209862 | orchestrator | Thursday 02 April 2026 00:47:24 +0000 (0:00:00.998) 0:03:20.168 ******** 2026-04-02 00:54:02.209866 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.209870 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.209874 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.209878 | orchestrator | 2026-04-02 00:54:02.209881 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-02 00:54:02.209903 | orchestrator | Thursday 02 April 2026 00:47:25 +0000 (0:00:00.393) 0:03:20.562 ******** 2026-04-02 00:54:02.209907 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.209910 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.209914 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.209918 | orchestrator | 2026-04-02 00:54:02.209922 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-02 00:54:02.209925 | orchestrator | Thursday 02 April 2026 00:47:26 +0000 (0:00:01.365) 0:03:21.927 ******** 2026-04-02 00:54:02.209929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-02 00:54:02.209933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-02 
00:54:02.209937 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-02 00:54:02.209940 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209944 | orchestrator | 2026-04-02 00:54:02.209948 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-02 00:54:02.209952 | orchestrator | Thursday 02 April 2026 00:47:27 +0000 (0:00:00.708) 0:03:22.636 ******** 2026-04-02 00:54:02.209955 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.209959 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.209963 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.209967 | orchestrator | 2026-04-02 00:54:02.209970 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-02 00:54:02.209974 | orchestrator | Thursday 02 April 2026 00:47:27 +0000 (0:00:00.447) 0:03:23.084 ******** 2026-04-02 00:54:02.209978 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.209982 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.209985 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.209989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:54:02.209995 | orchestrator | 2026-04-02 00:54:02.209999 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-02 00:54:02.210003 | orchestrator | Thursday 02 April 2026 00:47:28 +0000 (0:00:00.929) 0:03:24.013 ******** 2026-04-02 00:54:02.210007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.210010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.210033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.210038 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.210041 | orchestrator | 2026-04-02 
00:54:02.210045 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-02 00:54:02.210049 | orchestrator | Thursday 02 April 2026 00:47:29 +0000 (0:00:00.407) 0:03:24.421 ******** 2026-04-02 00:54:02.210053 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.210057 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.210060 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.210064 | orchestrator | 2026-04-02 00:54:02.210068 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-02 00:54:02.210072 | orchestrator | Thursday 02 April 2026 00:47:29 +0000 (0:00:00.551) 0:03:24.972 ******** 2026-04-02 00:54:02.210076 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.210104 | orchestrator | 2026-04-02 00:54:02.210108 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-02 00:54:02.210112 | orchestrator | Thursday 02 April 2026 00:47:30 +0000 (0:00:00.271) 0:03:25.244 ******** 2026-04-02 00:54:02.210118 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.210122 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.210126 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.210130 | orchestrator | 2026-04-02 00:54:02.210150 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-02 00:54:02.210156 | orchestrator | Thursday 02 April 2026 00:47:30 +0000 (0:00:00.386) 0:03:25.630 ******** 2026-04-02 00:54:02.210163 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.210167 | orchestrator | 2026-04-02 00:54:02.210171 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-02 00:54:02.210175 | orchestrator | Thursday 02 April 2026 00:47:30 +0000 (0:00:00.209) 0:03:25.840 ******** 2026-04-02 00:54:02.210179 | orchestrator | skipping: 
[testbed-node-3]
2026-04-02 00:54:02.210182 | orchestrator |
2026-04-02 00:54:02.210186 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-02 00:54:02.210190 | orchestrator | Thursday 02 April 2026 00:47:30 +0000 (0:00:00.210) 0:03:26.050 ********
2026-04-02 00:54:02.210194 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210197 | orchestrator |
2026-04-02 00:54:02.210201 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-02 00:54:02.210205 | orchestrator | Thursday 02 April 2026 00:47:30 +0000 (0:00:00.107) 0:03:26.158 ********
2026-04-02 00:54:02.210209 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210212 | orchestrator |
2026-04-02 00:54:02.210216 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-02 00:54:02.210220 | orchestrator | Thursday 02 April 2026 00:47:31 +0000 (0:00:00.194) 0:03:26.353 ********
2026-04-02 00:54:02.210224 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210227 | orchestrator |
2026-04-02 00:54:02.210231 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-02 00:54:02.210235 | orchestrator | Thursday 02 April 2026 00:47:31 +0000 (0:00:00.189) 0:03:26.542 ********
2026-04-02 00:54:02.210239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-02 00:54:02.210243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-02 00:54:02.210246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-02 00:54:02.210250 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210257 | orchestrator |
2026-04-02 00:54:02.210261 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-02 00:54:02.210280 | orchestrator | Thursday 02 April 2026 00:47:32 +0000 (0:00:00.684) 0:03:27.226 ********
2026-04-02 00:54:02.210285 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210288 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.210292 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.210296 | orchestrator |
2026-04-02 00:54:02.210300 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-02 00:54:02.210303 | orchestrator | Thursday 02 April 2026 00:47:32 +0000 (0:00:00.531) 0:03:27.758 ********
2026-04-02 00:54:02.210307 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210311 | orchestrator |
2026-04-02 00:54:02.210314 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-02 00:54:02.210318 | orchestrator | Thursday 02 April 2026 00:47:32 +0000 (0:00:00.226) 0:03:27.989 ********
2026-04-02 00:54:02.210322 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210326 | orchestrator |
2026-04-02 00:54:02.210329 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-02 00:54:02.210333 | orchestrator | Thursday 02 April 2026 00:47:32 +0000 (0:00:00.226) 0:03:28.215 ********
2026-04-02 00:54:02.210337 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210341 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210344 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210348 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.210352 | orchestrator |
2026-04-02 00:54:02.210356 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-02 00:54:02.210359 | orchestrator | Thursday 02 April 2026 00:47:33 +0000 (0:00:01.003) 0:03:29.218 ********
2026-04-02 00:54:02.210363 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.210367 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.210371 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.210374 | orchestrator |
2026-04-02 00:54:02.210378 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-02 00:54:02.210382 | orchestrator | Thursday 02 April 2026 00:47:34 +0000 (0:00:00.307) 0:03:29.526 ********
2026-04-02 00:54:02.210386 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.210389 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.210393 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.210397 | orchestrator |
2026-04-02 00:54:02.210401 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-02 00:54:02.210404 | orchestrator | Thursday 02 April 2026 00:47:35 +0000 (0:00:01.118) 0:03:30.644 ********
2026-04-02 00:54:02.210408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-02 00:54:02.210412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-02 00:54:02.210416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-02 00:54:02.210419 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210423 | orchestrator |
2026-04-02 00:54:02.210427 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-02 00:54:02.210431 | orchestrator | Thursday 02 April 2026 00:47:36 +0000 (0:00:00.854) 0:03:31.499 ********
2026-04-02 00:54:02.210434 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.210438 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.210442 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.210446 | orchestrator |
2026-04-02 00:54:02.210449 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-02 00:54:02.210453 | orchestrator | Thursday 02 April 2026 00:47:36 +0000 (0:00:00.324) 0:03:31.823 ********
2026-04-02 00:54:02.210457 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210461 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210464 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.210478 | orchestrator |
2026-04-02 00:54:02.210481 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-02 00:54:02.210485 | orchestrator | Thursday 02 April 2026 00:47:37 +0000 (0:00:01.045) 0:03:32.868 ********
2026-04-02 00:54:02.210489 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.210493 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.210496 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.210500 | orchestrator |
2026-04-02 00:54:02.210504 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-02 00:54:02.210508 | orchestrator | Thursday 02 April 2026 00:47:37 +0000 (0:00:00.216) 0:03:33.085 ********
2026-04-02 00:54:02.210511 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.210515 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.210519 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.210523 | orchestrator |
2026-04-02 00:54:02.210527 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-02 00:54:02.210531 | orchestrator | Thursday 02 April 2026 00:47:39 +0000 (0:00:01.148) 0:03:34.234 ********
2026-04-02 00:54:02.210534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-02 00:54:02.210538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-02 00:54:02.210542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-02 00:54:02.210545 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210549 | orchestrator |
2026-04-02 00:54:02.210553 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-02 00:54:02.210557 | orchestrator | Thursday 02 April 2026 00:47:39 +0000 (0:00:00.532) 0:03:34.767 ********
2026-04-02 00:54:02.210561 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.210564 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.210568 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.210572 | orchestrator |
2026-04-02 00:54:02.210575 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-02 00:54:02.210579 | orchestrator | Thursday 02 April 2026 00:47:39 +0000 (0:00:00.270) 0:03:35.037 ********
2026-04-02 00:54:02.210583 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210587 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.210590 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.210594 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210598 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210613 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210617 | orchestrator |
2026-04-02 00:54:02.210621 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-02 00:54:02.210625 | orchestrator | Thursday 02 April 2026 00:47:40 +0000 (0:00:00.553) 0:03:35.591 ********
2026-04-02 00:54:02.210629 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.210632 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.210636 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.210640 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.210644 | orchestrator |
2026-04-02 00:54:02.210647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-02 00:54:02.210651 | orchestrator | Thursday 02 April 2026 00:47:41 +0000 (0:00:00.932) 0:03:36.523 ********
2026-04-02 00:54:02.210655 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.210659 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.210663 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.210666 | orchestrator |
2026-04-02 00:54:02.210670 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-02 00:54:02.210674 | orchestrator | Thursday 02 April 2026 00:47:41 +0000 (0:00:00.284) 0:03:36.808 ********
2026-04-02 00:54:02.210678 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.210682 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.210689 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.210692 | orchestrator |
2026-04-02 00:54:02.210696 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-02 00:54:02.210700 | orchestrator | Thursday 02 April 2026 00:47:43 +0000 (0:00:01.563) 0:03:38.372 ********
2026-04-02 00:54:02.210704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-02 00:54:02.210707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-02 00:54:02.210711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-02 00:54:02.210715 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210719 | orchestrator |
2026-04-02 00:54:02.210722 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-02 00:54:02.210726 | orchestrator | Thursday 02 April 2026 00:47:43 +0000 (0:00:00.594) 0:03:38.966 ********
2026-04-02 00:54:02.210730 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.210734 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.210737 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.210741 | orchestrator |
2026-04-02 00:54:02.210745 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-02 00:54:02.210749 | orchestrator |
2026-04-02 00:54:02.210752 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-02 00:54:02.210756 | orchestrator | Thursday 02 April 2026 00:47:44 +0000 (0:00:00.582) 0:03:39.549 ********
2026-04-02 00:54:02.210760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.210764 | orchestrator |
2026-04-02 00:54:02.210768 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-02 00:54:02.210771 | orchestrator | Thursday 02 April 2026 00:47:45 +0000 (0:00:00.694) 0:03:40.243 ********
2026-04-02 00:54:02.210775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.210779 | orchestrator |
2026-04-02 00:54:02.210783 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-02 00:54:02.210787 | orchestrator | Thursday 02 April 2026 00:47:45 +0000 (0:00:00.634) 0:03:40.877 ********
2026-04-02 00:54:02.210792 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.210796 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.210800 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.210804 | orchestrator |
2026-04-02 00:54:02.210808 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-02 00:54:02.210811 | orchestrator | Thursday 02 April 2026 00:47:46 +0000 (0:00:00.645) 0:03:41.523 ********
2026-04-02 00:54:02.210815 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210819 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210823 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210827 | orchestrator |
2026-04-02 00:54:02.210830 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-02 00:54:02.210834 | orchestrator | Thursday 02 April 2026 00:47:46 +0000 (0:00:00.574) 0:03:42.097 ********
2026-04-02 00:54:02.210838 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210842 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210845 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210849 | orchestrator |
2026-04-02 00:54:02.210853 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-02 00:54:02.210857 | orchestrator | Thursday 02 April 2026 00:47:47 +0000 (0:00:00.322) 0:03:42.420 ********
2026-04-02 00:54:02.210860 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210864 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210868 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210872 | orchestrator |
2026-04-02 00:54:02.210876 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-02 00:54:02.210879 | orchestrator | Thursday 02 April 2026 00:47:47 +0000 (0:00:00.366) 0:03:42.787 ********
2026-04-02 00:54:02.210885 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.210889 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.210893 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.210897 | orchestrator |
2026-04-02 00:54:02.210900 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-02 00:54:02.210904 | orchestrator | Thursday 02 April 2026 00:47:48 +0000 (0:00:00.922) 0:03:43.710 ********
2026-04-02 00:54:02.210908 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210912 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210916 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210919 | orchestrator |
2026-04-02 00:54:02.210923 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-02 00:54:02.210927 | orchestrator | Thursday 02 April 2026 00:47:48 +0000 (0:00:00.259) 0:03:43.969 ********
2026-04-02 00:54:02.210942 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.210946 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.210950 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.210954 | orchestrator |
2026-04-02 00:54:02.210958 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-02 00:54:02.210961 | orchestrator | Thursday 02 April 2026 00:47:49 +0000 (0:00:00.361) 0:03:44.331 ********
2026-04-02 00:54:02.210965 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.210969 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.210973 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.210977 | orchestrator |
2026-04-02 00:54:02.210980 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-02 00:54:02.210984 | orchestrator | Thursday 02 April 2026 00:47:49 +0000 (0:00:00.591) 0:03:44.923 ********
2026-04-02 00:54:02.210988 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.210992 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.210995 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.210999 | orchestrator |
2026-04-02 00:54:02.211003 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-02 00:54:02.211007 | orchestrator | Thursday 02 April 2026 00:47:50 +0000 (0:00:00.272) 0:03:45.531 ********
2026-04-02 00:54:02.211010 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211014 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211018 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211022 | orchestrator |
2026-04-02 00:54:02.211026 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-02 00:54:02.211029 | orchestrator | Thursday 02 April 2026 00:47:50 +0000 (0:00:00.272) 0:03:45.804 ********
2026-04-02 00:54:02.211033 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211037 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211041 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211044 | orchestrator |
2026-04-02 00:54:02.211048 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-02 00:54:02.211052 | orchestrator | Thursday 02 April 2026 00:47:51 +0000 (0:00:00.421) 0:03:46.226 ********
2026-04-02 00:54:02.211056 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211059 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211063 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211067 | orchestrator |
2026-04-02 00:54:02.211070 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-02 00:54:02.211074 | orchestrator | Thursday 02 April 2026 00:47:51 +0000 (0:00:00.250) 0:03:46.476 ********
2026-04-02 00:54:02.211078 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211082 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211085 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211089 | orchestrator |
2026-04-02 00:54:02.211093 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-02 00:54:02.211097 | orchestrator | Thursday 02 April 2026 00:47:51 +0000 (0:00:00.263) 0:03:46.740 ********
2026-04-02 00:54:02.211100 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211104 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211111 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211114 | orchestrator |
2026-04-02 00:54:02.211118 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-02 00:54:02.211122 | orchestrator | Thursday 02 April 2026 00:47:51 +0000 (0:00:00.264) 0:03:47.004 ********
2026-04-02 00:54:02.211126 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211129 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211144 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211148 | orchestrator |
2026-04-02 00:54:02.211152 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-02 00:54:02.211156 | orchestrator | Thursday 02 April 2026 00:47:52 +0000 (0:00:00.433) 0:03:47.438 ********
2026-04-02 00:54:02.211160 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211166 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211170 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211174 | orchestrator |
2026-04-02 00:54:02.211177 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-02 00:54:02.211181 | orchestrator | Thursday 02 April 2026 00:47:52 +0000 (0:00:00.260) 0:03:47.698 ********
2026-04-02 00:54:02.211185 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211189 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211192 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211196 | orchestrator |
2026-04-02 00:54:02.211200 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-02 00:54:02.211204 | orchestrator | Thursday 02 April 2026 00:47:52 +0000 (0:00:00.286) 0:03:47.985 ********
2026-04-02 00:54:02.211207 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211211 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211215 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211218 | orchestrator |
2026-04-02 00:54:02.211222 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-02 00:54:02.211226 | orchestrator | Thursday 02 April 2026 00:47:53 +0000 (0:00:00.261) 0:03:48.247 ********
2026-04-02 00:54:02.211230 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211233 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211237 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211241 | orchestrator |
2026-04-02 00:54:02.211245 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-02 00:54:02.211248 | orchestrator | Thursday 02 April 2026 00:47:53 +0000 (0:00:00.616) 0:03:48.863 ********
2026-04-02 00:54:02.211252 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211256 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211260 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211263 | orchestrator |
2026-04-02 00:54:02.211267 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-02 00:54:02.211271 | orchestrator | Thursday 02 April 2026 00:47:53 +0000 (0:00:00.283) 0:03:49.147 ********
2026-04-02 00:54:02.211275 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.211278 | orchestrator |
2026-04-02 00:54:02.211282 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-02 00:54:02.211286 | orchestrator | Thursday 02 April 2026 00:47:54 +0000 (0:00:00.445) 0:03:49.592 ********
2026-04-02 00:54:02.211290 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211294 | orchestrator |
2026-04-02 00:54:02.211310 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-02 00:54:02.211315 | orchestrator | Thursday 02 April 2026 00:47:54 +0000 (0:00:00.253) 0:03:49.846 ********
2026-04-02 00:54:02.211319 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-02 00:54:02.211322 | orchestrator |
2026-04-02 00:54:02.211326 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-02 00:54:02.211330 | orchestrator | Thursday 02 April 2026 00:47:55 +0000 (0:00:00.933) 0:03:50.779 ********
2026-04-02 00:54:02.211334 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211340 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211344 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211348 | orchestrator |
2026-04-02 00:54:02.211352 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-02 00:54:02.211356 | orchestrator | Thursday 02 April 2026 00:47:55 +0000 (0:00:00.264) 0:03:51.043 ********
2026-04-02 00:54:02.211359 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211363 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211367 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211371 | orchestrator |
2026-04-02 00:54:02.211375 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-02 00:54:02.211378 | orchestrator | Thursday 02 April 2026 00:47:56 +0000 (0:00:00.322) 0:03:51.366 ********
2026-04-02 00:54:02.211382 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211386 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211390 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211393 | orchestrator |
2026-04-02 00:54:02.211397 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-02 00:54:02.211401 | orchestrator | Thursday 02 April 2026 00:47:57 +0000 (0:00:01.478) 0:03:52.845 ********
2026-04-02 00:54:02.211405 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211409 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211412 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211416 | orchestrator |
2026-04-02 00:54:02.211420 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-02 00:54:02.211424 | orchestrator | Thursday 02 April 2026 00:47:58 +0000 (0:00:00.891) 0:03:53.736 ********
2026-04-02 00:54:02.211427 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211431 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211435 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211439 | orchestrator |
2026-04-02 00:54:02.211442 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-02 00:54:02.211446 | orchestrator | Thursday 02 April 2026 00:47:59 +0000 (0:00:00.717) 0:03:54.454 ********
2026-04-02 00:54:02.211450 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211454 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211458 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211461 | orchestrator |
2026-04-02 00:54:02.211465 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-02 00:54:02.211469 | orchestrator | Thursday 02 April 2026 00:47:59 +0000 (0:00:00.597) 0:03:55.051 ********
2026-04-02 00:54:02.211473 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211476 | orchestrator |
2026-04-02 00:54:02.211480 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-02 00:54:02.211484 | orchestrator | Thursday 02 April 2026 00:48:01 +0000 (0:00:01.239) 0:03:56.291 ********
2026-04-02 00:54:02.211488 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211492 | orchestrator |
2026-04-02 00:54:02.211495 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-02 00:54:02.211499 | orchestrator | Thursday 02 April 2026 00:48:01 +0000 (0:00:00.660) 0:03:56.952 ********
2026-04-02 00:54:02.211503 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-02 00:54:02.211507 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.211513 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.211516 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-02 00:54:02.211520 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-02 00:54:02.211524 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-02 00:54:02.211528 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-02 00:54:02.211532 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-02 00:54:02.211535 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-02 00:54:02.211542 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-02 00:54:02.211545 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-02 00:54:02.211549 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-02 00:54:02.211553 | orchestrator |
2026-04-02 00:54:02.211557 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-02 00:54:02.211560 | orchestrator | Thursday 02 April 2026 00:48:04 +0000 (0:00:03.002) 0:03:59.955 ********
2026-04-02 00:54:02.211564 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211568 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211572 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211575 | orchestrator |
2026-04-02 00:54:02.211579 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-02 00:54:02.211583 | orchestrator | Thursday 02 April 2026 00:48:05 +0000 (0:00:01.233) 0:04:01.189 ********
2026-04-02 00:54:02.211587 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211591 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211594 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211598 | orchestrator |
2026-04-02 00:54:02.211602 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-02 00:54:02.211606 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.281) 0:04:01.470 ********
2026-04-02 00:54:02.211609 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211613 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211617 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211620 | orchestrator |
2026-04-02 00:54:02.211624 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-02 00:54:02.211628 | orchestrator | Thursday 02 April 2026 00:48:06 +0000 (0:00:00.274) 0:04:01.745 ********
2026-04-02 00:54:02.211632 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211648 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211652 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211656 | orchestrator |
2026-04-02 00:54:02.211660 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-02 00:54:02.211664 | orchestrator | Thursday 02 April 2026 00:48:07 +0000 (0:00:01.415) 0:04:03.161 ********
2026-04-02 00:54:02.211667 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211671 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211675 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211679 | orchestrator |
2026-04-02 00:54:02.211682 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-02 00:54:02.211686 | orchestrator | Thursday 02 April 2026 00:48:09 +0000 (0:00:01.572) 0:04:04.733 ********
2026-04-02 00:54:02.211690 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211694 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211697 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211701 | orchestrator |
2026-04-02 00:54:02.211705 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-02 00:54:02.211709 | orchestrator | Thursday 02 April 2026 00:48:09 +0000 (0:00:00.339) 0:04:05.073 ********
2026-04-02 00:54:02.211713 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.211716 | orchestrator |
2026-04-02 00:54:02.211720 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-02 00:54:02.211724 | orchestrator | Thursday 02 April 2026 00:48:10 +0000 (0:00:00.651) 0:04:05.725 ********
2026-04-02 00:54:02.211728 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211731 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211735 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211739 | orchestrator |
2026-04-02 00:54:02.211743 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-02 00:54:02.211746 | orchestrator | Thursday 02 April 2026 00:48:10 +0000 (0:00:00.439) 0:04:06.164 ********
2026-04-02 00:54:02.211750 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211754 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211762 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211766 | orchestrator |
2026-04-02 00:54:02.211770 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-02 00:54:02.211773 | orchestrator | Thursday 02 April 2026 00:48:11 +0000 (0:00:00.282) 0:04:06.446 ********
2026-04-02 00:54:02.211777 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.211781 | orchestrator |
2026-04-02 00:54:02.211785 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-02 00:54:02.211788 | orchestrator | Thursday 02 April 2026 00:48:11 +0000 (0:00:00.501) 0:04:06.948 ********
2026-04-02 00:54:02.211792 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211796 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211800 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211803 | orchestrator |
2026-04-02 00:54:02.211807 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-02 00:54:02.211811 | orchestrator | Thursday 02 April 2026 00:48:14 +0000 (0:00:02.910) 0:04:09.858 ********
2026-04-02 00:54:02.211815 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211818 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211822 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211826 | orchestrator |
2026-04-02 00:54:02.211830 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-02 00:54:02.211834 | orchestrator | Thursday 02 April 2026 00:48:16 +0000 (0:00:01.580) 0:04:11.440 ********
2026-04-02 00:54:02.211840 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211844 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211847 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211851 | orchestrator |
2026-04-02 00:54:02.211855 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-02 00:54:02.211859 | orchestrator | Thursday 02 April 2026 00:48:18 +0000 (0:00:01.889) 0:04:13.329 ********
2026-04-02 00:54:02.211862 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:02.211866 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:02.211870 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:02.211874 | orchestrator |
2026-04-02 00:54:02.211877 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-02 00:54:02.211881 | orchestrator | Thursday 02 April 2026 00:48:20 +0000 (0:00:02.222) 0:04:15.552 ********
2026-04-02 00:54:02.211885 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.211889 | orchestrator |
2026-04-02 00:54:02.211892 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-02 00:54:02.211896 | orchestrator | Thursday 02 April 2026 00:48:20 +0000 (0:00:00.621) 0:04:16.173 ********
2026-04-02 00:54:02.211900 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-02 00:54:02.211904 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211907 | orchestrator |
2026-04-02 00:54:02.211911 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-02 00:54:02.211915 | orchestrator | Thursday 02 April 2026 00:48:42 +0000 (0:00:21.507) 0:04:37.681 ********
2026-04-02 00:54:02.211919 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:02.211922 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:02.211926 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:02.211930 | orchestrator |
2026-04-02 00:54:02.211934 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-02 00:54:02.211938 | orchestrator | Thursday 02 April 2026 00:48:47 +0000 (0:00:05.395) 0:04:43.076 ********
2026-04-02 00:54:02.211941 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.211945 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.211949 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.211952 | orchestrator |
2026-04-02 00:54:02.211956 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-02 00:54:02.211976 | orchestrator | Thursday 02 April 2026 00:48:48 +0000 (0:00:00.287) 0:04:43.364 ********
2026-04-02 00:54:02.211981 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ba2b0073da3374904e90618c2f7fe822b55a5e5f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-02 00:54:02.211987 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ba2b0073da3374904e90618c2f7fe822b55a5e5f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-02 00:54:02.211992 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ba2b0073da3374904e90618c2f7fe822b55a5e5f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-02 00:54:02.211997 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ba2b0073da3374904e90618c2f7fe822b55a5e5f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-02 00:54:02.212001 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ba2b0073da3374904e90618c2f7fe822b55a5e5f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-02 00:54:02.212005 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ba2b0073da3374904e90618c2f7fe822b55a5e5f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ba2b0073da3374904e90618c2f7fe822b55a5e5f'}])
2026-04-02 00:54:02.212010 | orchestrator |
2026-04-02 00:54:02.212014 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-02 00:54:02.212020 | orchestrator | Thursday 02 April 2026 00:48:58 +0000 (0:00:10.601) 0:04:53.965 ********
2026-04-02 00:54:02.212024 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:02.212028 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:54:02.212032 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:54:02.212035 | orchestrator |
2026-04-02 00:54:02.212039 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-02 00:54:02.212043 | orchestrator | Thursday 02 April 2026 00:48:59 +0000 (0:00:00.351) 0:04:54.316 ********
2026-04-02 00:54:02.212046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:02.212050 | orchestrator |
2026-04-02 00:54:02.212054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-02 00:54:02.212058 | orchestrator | Thursday 02 April 2026 00:48:59 +0000 (0:00:00.714)
0:04:55.030 ******** 2026-04-02 00:54:02.212062 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212065 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212069 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212073 | orchestrator | 2026-04-02 00:54:02.212077 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-02 00:54:02.212083 | orchestrator | Thursday 02 April 2026 00:49:00 +0000 (0:00:00.304) 0:04:55.335 ******** 2026-04-02 00:54:02.212086 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212090 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212094 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212098 | orchestrator | 2026-04-02 00:54:02.212101 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-02 00:54:02.212105 | orchestrator | Thursday 02 April 2026 00:49:00 +0000 (0:00:00.328) 0:04:55.664 ******** 2026-04-02 00:54:02.212109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-02 00:54:02.212113 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-02 00:54:02.212116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-02 00:54:02.212120 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212124 | orchestrator | 2026-04-02 00:54:02.212128 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-02 00:54:02.212143 | orchestrator | Thursday 02 April 2026 00:49:01 +0000 (0:00:00.581) 0:04:56.245 ******** 2026-04-02 00:54:02.212147 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212151 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212168 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212172 | orchestrator | 2026-04-02 00:54:02.212176 | orchestrator | PLAY [Apply role ceph-mgr] 
***************************************************** 2026-04-02 00:54:02.212179 | orchestrator | 2026-04-02 00:54:02.212183 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-02 00:54:02.212187 | orchestrator | Thursday 02 April 2026 00:49:01 +0000 (0:00:00.675) 0:04:56.920 ******** 2026-04-02 00:54:02.212191 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.212195 | orchestrator | 2026-04-02 00:54:02.212198 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-02 00:54:02.212202 | orchestrator | Thursday 02 April 2026 00:49:02 +0000 (0:00:00.435) 0:04:57.356 ******** 2026-04-02 00:54:02.212206 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.212210 | orchestrator | 2026-04-02 00:54:02.212214 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-02 00:54:02.212217 | orchestrator | Thursday 02 April 2026 00:49:02 +0000 (0:00:00.450) 0:04:57.807 ******** 2026-04-02 00:54:02.212221 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212225 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212229 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212232 | orchestrator | 2026-04-02 00:54:02.212236 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-02 00:54:02.212240 | orchestrator | Thursday 02 April 2026 00:49:03 +0000 (0:00:00.815) 0:04:58.622 ******** 2026-04-02 00:54:02.212244 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212247 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212251 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212255 | orchestrator | 2026-04-02 
00:54:02.212258 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-02 00:54:02.212262 | orchestrator | Thursday 02 April 2026 00:49:03 +0000 (0:00:00.257) 0:04:58.880 ******** 2026-04-02 00:54:02.212266 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212270 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212273 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212277 | orchestrator | 2026-04-02 00:54:02.212281 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-02 00:54:02.212285 | orchestrator | Thursday 02 April 2026 00:49:03 +0000 (0:00:00.267) 0:04:59.147 ******** 2026-04-02 00:54:02.212288 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212292 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212296 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212302 | orchestrator | 2026-04-02 00:54:02.212306 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-02 00:54:02.212310 | orchestrator | Thursday 02 April 2026 00:49:04 +0000 (0:00:00.246) 0:04:59.394 ******** 2026-04-02 00:54:02.212314 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212317 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212321 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212325 | orchestrator | 2026-04-02 00:54:02.212329 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-02 00:54:02.212332 | orchestrator | Thursday 02 April 2026 00:49:05 +0000 (0:00:00.895) 0:05:00.290 ******** 2026-04-02 00:54:02.212336 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212340 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212344 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212347 | orchestrator | 2026-04-02 00:54:02.212351 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-02 00:54:02.212357 | orchestrator | Thursday 02 April 2026 00:49:05 +0000 (0:00:00.258) 0:05:00.549 ******** 2026-04-02 00:54:02.212361 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212364 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212368 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212372 | orchestrator | 2026-04-02 00:54:02.212375 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-02 00:54:02.212379 | orchestrator | Thursday 02 April 2026 00:49:05 +0000 (0:00:00.258) 0:05:00.807 ******** 2026-04-02 00:54:02.212383 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212387 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212391 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212394 | orchestrator | 2026-04-02 00:54:02.212398 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-02 00:54:02.212402 | orchestrator | Thursday 02 April 2026 00:49:06 +0000 (0:00:00.746) 0:05:01.553 ******** 2026-04-02 00:54:02.212405 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212409 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212413 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212417 | orchestrator | 2026-04-02 00:54:02.212420 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-02 00:54:02.212424 | orchestrator | Thursday 02 April 2026 00:49:07 +0000 (0:00:00.848) 0:05:02.402 ******** 2026-04-02 00:54:02.212428 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212432 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212435 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212439 | orchestrator | 2026-04-02 00:54:02.212443 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2026-04-02 00:54:02.212447 | orchestrator | Thursday 02 April 2026 00:49:07 +0000 (0:00:00.254) 0:05:02.657 ******** 2026-04-02 00:54:02.212450 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212454 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212458 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212462 | orchestrator | 2026-04-02 00:54:02.212465 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-02 00:54:02.212469 | orchestrator | Thursday 02 April 2026 00:49:07 +0000 (0:00:00.276) 0:05:02.933 ******** 2026-04-02 00:54:02.212473 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212476 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212480 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212484 | orchestrator | 2026-04-02 00:54:02.212488 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-02 00:54:02.212504 | orchestrator | Thursday 02 April 2026 00:49:07 +0000 (0:00:00.253) 0:05:03.187 ******** 2026-04-02 00:54:02.212509 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212512 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212516 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212520 | orchestrator | 2026-04-02 00:54:02.212524 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-02 00:54:02.212530 | orchestrator | Thursday 02 April 2026 00:49:08 +0000 (0:00:00.433) 0:05:03.620 ******** 2026-04-02 00:54:02.212534 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212538 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212541 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212545 | orchestrator | 2026-04-02 00:54:02.212549 | orchestrator | TASK [ceph-handler : Set_fact 
handler_nfs_status] ****************************** 2026-04-02 00:54:02.212553 | orchestrator | Thursday 02 April 2026 00:49:08 +0000 (0:00:00.274) 0:05:03.895 ******** 2026-04-02 00:54:02.212556 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212560 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212564 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212567 | orchestrator | 2026-04-02 00:54:02.212571 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-02 00:54:02.212575 | orchestrator | Thursday 02 April 2026 00:49:08 +0000 (0:00:00.272) 0:05:04.167 ******** 2026-04-02 00:54:02.212579 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212582 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212586 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212590 | orchestrator | 2026-04-02 00:54:02.212594 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-02 00:54:02.212597 | orchestrator | Thursday 02 April 2026 00:49:09 +0000 (0:00:00.268) 0:05:04.435 ******** 2026-04-02 00:54:02.212601 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212605 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212609 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212612 | orchestrator | 2026-04-02 00:54:02.212616 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-02 00:54:02.212620 | orchestrator | Thursday 02 April 2026 00:49:09 +0000 (0:00:00.305) 0:05:04.741 ******** 2026-04-02 00:54:02.212623 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212627 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212631 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212635 | orchestrator | 2026-04-02 00:54:02.212638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2026-04-02 00:54:02.212642 | orchestrator | Thursday 02 April 2026 00:49:10 +0000 (0:00:00.569) 0:05:05.311 ******** 2026-04-02 00:54:02.212646 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212650 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212653 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212657 | orchestrator | 2026-04-02 00:54:02.212661 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-02 00:54:02.212664 | orchestrator | Thursday 02 April 2026 00:49:10 +0000 (0:00:00.496) 0:05:05.807 ******** 2026-04-02 00:54:02.212668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-02 00:54:02.212672 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-02 00:54:02.212676 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-02 00:54:02.212680 | orchestrator | 2026-04-02 00:54:02.212683 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-02 00:54:02.212687 | orchestrator | Thursday 02 April 2026 00:49:11 +0000 (0:00:00.690) 0:05:06.498 ******** 2026-04-02 00:54:02.212691 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.212695 | orchestrator | 2026-04-02 00:54:02.212700 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-02 00:54:02.212704 | orchestrator | Thursday 02 April 2026 00:49:11 +0000 (0:00:00.621) 0:05:07.120 ******** 2026-04-02 00:54:02.212708 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.212711 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.212715 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.212719 | orchestrator | 2026-04-02 00:54:02.212723 | orchestrator | TASK [ceph-mgr : Fetch 
ceph mgr keyring] *************************************** 2026-04-02 00:54:02.212730 | orchestrator | Thursday 02 April 2026 00:49:12 +0000 (0:00:00.669) 0:05:07.789 ******** 2026-04-02 00:54:02.212733 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212737 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212741 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212745 | orchestrator | 2026-04-02 00:54:02.212748 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-02 00:54:02.212752 | orchestrator | Thursday 02 April 2026 00:49:12 +0000 (0:00:00.263) 0:05:08.052 ******** 2026-04-02 00:54:02.212756 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-02 00:54:02.212760 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-02 00:54:02.212763 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-02 00:54:02.212767 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-02 00:54:02.212771 | orchestrator | 2026-04-02 00:54:02.212774 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-02 00:54:02.212778 | orchestrator | Thursday 02 April 2026 00:49:21 +0000 (0:00:08.257) 0:05:16.310 ******** 2026-04-02 00:54:02.212782 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212786 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212789 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212793 | orchestrator | 2026-04-02 00:54:02.212797 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-02 00:54:02.212801 | orchestrator | Thursday 02 April 2026 00:49:21 +0000 (0:00:00.502) 0:05:16.812 ******** 2026-04-02 00:54:02.212804 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-02 00:54:02.212808 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-02 
00:54:02.212812 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-02 00:54:02.212816 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-02 00:54:02.212819 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-02 00:54:02.212834 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-02 00:54:02.212839 | orchestrator | 2026-04-02 00:54:02.212843 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-02 00:54:02.212846 | orchestrator | Thursday 02 April 2026 00:49:23 +0000 (0:00:01.536) 0:05:18.349 ******** 2026-04-02 00:54:02.212850 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-02 00:54:02.212854 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-02 00:54:02.212858 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-02 00:54:02.212861 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-02 00:54:02.212865 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-02 00:54:02.212869 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-02 00:54:02.212873 | orchestrator | 2026-04-02 00:54:02.212876 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-02 00:54:02.212880 | orchestrator | Thursday 02 April 2026 00:49:24 +0000 (0:00:01.139) 0:05:19.488 ******** 2026-04-02 00:54:02.212884 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.212888 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.212891 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.212895 | orchestrator | 2026-04-02 00:54:02.212899 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-02 00:54:02.212903 | orchestrator | Thursday 02 April 2026 00:49:24 +0000 (0:00:00.729) 0:05:20.218 ******** 2026-04-02 00:54:02.212906 | orchestrator | 
skipping: [testbed-node-0] 2026-04-02 00:54:02.212910 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212914 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212917 | orchestrator | 2026-04-02 00:54:02.212921 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-02 00:54:02.212925 | orchestrator | Thursday 02 April 2026 00:49:25 +0000 (0:00:00.417) 0:05:20.635 ******** 2026-04-02 00:54:02.212929 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212937 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212941 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212945 | orchestrator | 2026-04-02 00:54:02.212948 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-02 00:54:02.212952 | orchestrator | Thursday 02 April 2026 00:49:25 +0000 (0:00:00.251) 0:05:20.887 ******** 2026-04-02 00:54:02.212956 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.212960 | orchestrator | 2026-04-02 00:54:02.212963 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-02 00:54:02.212967 | orchestrator | Thursday 02 April 2026 00:49:26 +0000 (0:00:00.457) 0:05:21.344 ******** 2026-04-02 00:54:02.212971 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.212974 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.212978 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.212982 | orchestrator | 2026-04-02 00:54:02.212986 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-02 00:54:02.213006 | orchestrator | Thursday 02 April 2026 00:49:26 +0000 (0:00:00.288) 0:05:21.633 ******** 2026-04-02 00:54:02.213011 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.213015 | 
orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.213018 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.213022 | orchestrator | 2026-04-02 00:54:02.213026 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-02 00:54:02.213030 | orchestrator | Thursday 02 April 2026 00:49:26 +0000 (0:00:00.418) 0:05:22.052 ******** 2026-04-02 00:54:02.213033 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.213037 | orchestrator | 2026-04-02 00:54:02.213043 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-02 00:54:02.213047 | orchestrator | Thursday 02 April 2026 00:49:27 +0000 (0:00:00.434) 0:05:22.486 ******** 2026-04-02 00:54:02.213051 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.213055 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.213058 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.213062 | orchestrator | 2026-04-02 00:54:02.213066 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-02 00:54:02.213069 | orchestrator | Thursday 02 April 2026 00:49:28 +0000 (0:00:01.028) 0:05:23.515 ******** 2026-04-02 00:54:02.213073 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.213077 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.213081 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.213084 | orchestrator | 2026-04-02 00:54:02.213088 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-02 00:54:02.213092 | orchestrator | Thursday 02 April 2026 00:49:29 +0000 (0:00:01.247) 0:05:24.762 ******** 2026-04-02 00:54:02.213096 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.213099 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.213103 | 
orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.213107 | orchestrator | 2026-04-02 00:54:02.213110 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-02 00:54:02.213114 | orchestrator | Thursday 02 April 2026 00:49:31 +0000 (0:00:01.790) 0:05:26.552 ******** 2026-04-02 00:54:02.213118 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.213122 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.213125 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.213129 | orchestrator | 2026-04-02 00:54:02.213172 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-02 00:54:02.213176 | orchestrator | Thursday 02 April 2026 00:49:33 +0000 (0:00:02.103) 0:05:28.655 ******** 2026-04-02 00:54:02.213180 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.213184 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.213188 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-02 00:54:02.213195 | orchestrator | 2026-04-02 00:54:02.213199 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-02 00:54:02.213203 | orchestrator | Thursday 02 April 2026 00:49:33 +0000 (0:00:00.419) 0:05:29.074 ******** 2026-04-02 00:54:02.213221 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-02 00:54:02.213226 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 
2026-04-02 00:54:02.213230 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-02 00:54:02.213233 | orchestrator | 2026-04-02 00:54:02.213237 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-02 00:54:02.213241 | orchestrator | Thursday 02 April 2026 00:49:47 +0000 (0:00:13.439) 0:05:42.514 ******** 2026-04-02 00:54:02.213245 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-02 00:54:02.213248 | orchestrator | 2026-04-02 00:54:02.213252 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-02 00:54:02.213256 | orchestrator | Thursday 02 April 2026 00:49:48 +0000 (0:00:01.204) 0:05:43.718 ******** 2026-04-02 00:54:02.213260 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.213264 | orchestrator | 2026-04-02 00:54:02.213267 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-02 00:54:02.213271 | orchestrator | Thursday 02 April 2026 00:49:48 +0000 (0:00:00.414) 0:05:44.132 ******** 2026-04-02 00:54:02.213275 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.213279 | orchestrator | 2026-04-02 00:54:02.213282 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-02 00:54:02.213286 | orchestrator | Thursday 02 April 2026 00:49:49 +0000 (0:00:00.125) 0:05:44.258 ******** 2026-04-02 00:54:02.213290 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-02 00:54:02.213294 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-02 00:54:02.213298 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-02 00:54:02.213301 | orchestrator | 2026-04-02 00:54:02.213305 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
**************************************
Thursday 02 April 2026 00:49:54 +0000 (0:00:05.959) 0:05:50.218 ********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Thursday 02 April 2026 00:49:59 +0000 (0:00:04.566) 0:05:54.784 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Thursday 02 April 2026 00:50:00 +0000 (0:00:00.830) 0:05:55.615 ********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Thursday 02 April 2026 00:50:00 +0000 (0:00:00.458) 0:05:56.073 ********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Thursday 02 April 2026 00:50:01 +0000 (0:00:00.274) 0:05:56.348 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Thursday 02 April 2026 00:50:02 +0000 (0:00:01.433) 0:05:57.782 ********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Thursday 02 April 2026 00:50:03 +0000 (0:00:00.522) 0:05:58.304 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 02 April 2026 00:50:03 +0000 (0:00:00.452) 0:05:58.756 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 02 April 2026 00:50:04 +0000 (0:00:00.557) 0:05:59.313 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 02 April 2026 00:50:04 +0000 (0:00:00.443) 0:05:59.757 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 02 April 2026 00:50:04 +0000 (0:00:00.254) 0:06:00.012 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 02 April 2026 00:50:05 +0000 (0:00:00.848) 0:06:00.861 ********
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 02 April 2026 00:50:06 +0000 (0:00:00.695) 0:06:01.556 ********
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 02 April 2026 00:50:07 +0000 (0:00:00.685) 0:06:02.242 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 02 April 2026 00:50:07 +0000 (0:00:00.262) 0:06:02.504 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 02 April 2026 00:50:07 +0000 (0:00:00.415) 0:06:02.919 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 02 April 2026 00:50:07 +0000 (0:00:00.260) 0:06:03.180 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 02 April 2026 00:50:08 +0000 (0:00:00.705) 0:06:03.886 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 02 April 2026 00:50:09 +0000 (0:00:00.425) 0:06:04.565 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 02 April 2026 00:50:09 +0000 (0:00:00.262) 0:06:04.991 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 02 April 2026 00:50:10 +0000 (0:00:00.262) 0:06:05.254 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 02 April 2026 00:50:10 +0000 (0:00:00.332) 0:06:05.586 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 02 April 2026 00:50:10 +0000 (0:00:00.300) 0:06:05.887 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 02 April 2026 00:50:11 +0000 (0:00:00.488) 0:06:06.376 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 02 April 2026 00:50:11 +0000 (0:00:00.260) 0:06:06.637 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 02 April 2026 00:50:11 +0000 (0:00:00.302) 0:06:06.940 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 02 April 2026 00:50:11 +0000 (0:00:00.265) 0:06:07.205 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 02 April 2026 00:50:12 +0000 (0:00:00.560) 0:06:07.765 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Thursday 02 April 2026 00:50:13 +0000 (0:00:00.523) 0:06:08.289 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Thursday 02 April 2026 00:50:13 +0000 (0:00:00.310) 0:06:08.600 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Thursday 02 April 2026 00:50:14 +0000 (0:00:00.842) 0:06:09.442 ********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Thursday 02 April 2026 00:50:14 +0000 (0:00:00.751) 0:06:10.193 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Thursday 02 April 2026 00:50:15 +0000 (0:00:00.307) 0:06:10.501 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Thursday 02 April 2026 00:50:15 +0000 (0:00:00.298) 0:06:10.799 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Thursday 02 April 2026 00:50:16 +0000 (0:00:00.914) 0:06:11.714 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Thursday 02 April 2026 00:50:16 +0000 (0:00:00.324) 0:06:12.038 ********
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Thursday 02 April 2026 00:50:20 +0000 (0:00:03.196) 0:06:15.235 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Thursday 02 April 2026 00:50:20 +0000 (0:00:00.286) 0:06:15.522 ********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Thursday 02 April 2026 00:50:21 +0000 (0:00:00.747) 0:06:16.269 ********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Thursday 02 April 2026 00:50:22 +0000 (0:00:01.174) 0:06:17.444 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Thursday 02 April 2026 00:50:23 +0000 (0:00:01.679) 0:06:19.124 ********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Thursday 02 April 2026 00:50:25 +0000 (0:00:01.348) 0:06:20.472 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Thursday 02 April 2026 00:50:27 +0000 (0:00:01.964) 0:06:22.436 ********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Thursday 02 April 2026 00:50:27 +0000 (0:00:00.445) 0:06:22.882 ********
changed: [testbed-node-5] => (item={'data': 'osd-block-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba', 'data_vg': 'ceph-ce3dc94c-dd22-5089-bd64-d73b3d29d8ba'})
changed: [testbed-node-3] => (item={'data': 'osd-block-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb', 'data_vg': 'ceph-3f9aa46c-6044-534e-8fed-f8e8e1b6cabb'})
changed: [testbed-node-4] => (item={'data': 'osd-block-88a5a1a0-9236-5c9d-8025-e39ec03fb505', 'data_vg': 'ceph-88a5a1a0-9236-5c9d-8025-e39ec03fb505'})
changed: [testbed-node-5] => (item={'data': 'osd-block-bc329f0f-76ef-5b6a-a482-1349b51ce957', 'data_vg': 'ceph-bc329f0f-76ef-5b6a-a482-1349b51ce957'})
changed: [testbed-node-3] => (item={'data': 'osd-block-c3a3e1f2-53da-5696-b7a3-d36d02964763', 'data_vg': 'ceph-c3a3e1f2-53da-5696-b7a3-d36d02964763'})
changed: [testbed-node-4] => (item={'data': 'osd-block-b27c5b00-4597-5124-934a-fd641c3feb65', 'data_vg': 'ceph-b27c5b00-4597-5124-934a-fd641c3feb65'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Thursday 02 April 2026 00:51:02 +0000 (0:00:35.288) 0:06:58.170 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Thursday 02 April 2026 00:51:03 +0000 (0:00:00.640) 0:06:58.811 ********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Thursday 02 April 2026 00:51:04 +0000 (0:00:00.548) 0:06:59.360 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Thursday 02 April 2026 00:51:04 +0000 (0:00:00.626) 0:06:59.986 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Thursday 02 April 2026 00:51:06 +0000 (0:00:01.594) 0:07:01.581 ********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Thursday 02 April 2026 00:51:06 +0000 (0:00:00.451) 0:07:02.032 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Thursday 02 April 2026 00:51:07 +0000 (0:00:01.027) 0:07:03.059 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Thursday 02 April 2026 00:51:09 +0000 (0:00:01.179) 0:07:04.239 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Thursday 02 April 2026 00:51:11 +0000 (0:00:02.039) 0:07:06.278 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Thursday 02 April 2026 00:51:11 +0000 (0:00:00.291) 0:07:06.569 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Thursday 02 April 2026 00:51:11 +0000 (0:00:00.283) 0:07:06.852 ********
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=4)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-3] => (item=5)
ok: [testbed-node-4] => (item=1)
ok: [testbed-node-5] => (item=3)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Thursday 02 April 2026 00:51:12 +0000 (0:00:01.321) 0:07:08.174 ********
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=4)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=3)

TASK [ceph-osd : Systemd start osd] ********************************************
Thursday 02 April 2026 00:51:14 +0000 (0:00:01.916) 0:07:10.091 ********
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=4)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=3)

TASK [ceph-osd : Unset noup flag] **********************************************
Thursday 02 April 2026 00:51:18 +0000 (0:00:03.420) 0:07:13.511 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Thursday 02 April 2026 00:51:20 +0000 (0:00:02.220) 0:07:15.731 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Thursday 02 April 2026 00:51:32 +0000 (0:00:12.432) 0:07:28.163 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Thursday 02 April 2026 00:51:33 +0000 (0:00:00.857) 0:07:29.021 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Thursday 02 April 2026 00:51:34 +0000 (0:00:00.613) 0:07:29.635 ********
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Thursday 02 April 2026 00:51:34 +0000 (0:00:00.550) 0:07:30.185 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Thursday 02 April 2026 00:51:35 +0000 (0:00:00.417) 0:07:30.603 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Thursday 02 April 2026 00:51:35 +0000 (0:00:00.305) 0:07:30.909 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Thursday 02 April 2026 00:51:35 +0000 (0:00:00.229) 0:07:31.139 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Thursday 02 April 2026 00:51:36 +0000 (0:00:00.615) 0:07:31.755 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Thursday 02 April 2026 00:51:36 +0000 (0:00:00.234) 0:07:31.989 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Thursday 02 April 2026 00:51:36 +0000 (0:00:00.226) 0:07:32.216 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Thursday 02 April 2026 00:51:37 +0000 (0:00:00.113) 0:07:32.329 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Thursday 02 April 2026 00:51:37 +0000 (0:00:00.220) 0:07:32.550 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Thursday 02 April 2026 00:51:37 +0000 (0:00:00.211) 0:07:32.762 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Thursday 02 April 2026 00:51:37 +0000 (0:00:00.394) 0:07:33.157 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Thursday 02 April 2026 00:51:38 +0000 (0:00:00.304) 0:07:33.461 ********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Thursday 02 April 2026 00:51:39 +0000 (0:00:00.763) 0:07:34.225 ********
skipping: [testbed-node-3]

PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 02 April 2026 00:51:39 +0000 (0:00:00.650) 0:07:34.875 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include
check_running_containers.yml] ********************* 2026-04-02 00:54:02.215694 | orchestrator | Thursday 02 April 2026 00:51:40 +0000 (0:00:01.166) 0:07:36.042 ******** 2026-04-02 00:54:02.215698 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.215702 | orchestrator | 2026-04-02 00:54:02.215706 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-02 00:54:02.215710 | orchestrator | Thursday 02 April 2026 00:51:42 +0000 (0:00:01.201) 0:07:37.243 ******** 2026-04-02 00:54:02.215714 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.215718 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.215724 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.215728 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.215732 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.215736 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.215739 | orchestrator | 2026-04-02 00:54:02.215743 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-02 00:54:02.215747 | orchestrator | Thursday 02 April 2026 00:51:43 +0000 (0:00:01.116) 0:07:38.360 ******** 2026-04-02 00:54:02.215751 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.215755 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.215758 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.215762 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.215766 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.215770 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.215773 | orchestrator | 2026-04-02 00:54:02.215777 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-02 00:54:02.215783 | orchestrator | Thursday 02 
April 2026 00:51:43 +0000 (0:00:00.663) 0:07:39.023 ******** 2026-04-02 00:54:02.215787 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.215791 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.215795 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.215798 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.215802 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.215806 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.215810 | orchestrator | 2026-04-02 00:54:02.215813 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-02 00:54:02.215817 | orchestrator | Thursday 02 April 2026 00:51:44 +0000 (0:00:00.865) 0:07:39.889 ******** 2026-04-02 00:54:02.215821 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.215825 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.215829 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.215832 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.215836 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.215840 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.215844 | orchestrator | 2026-04-02 00:54:02.215848 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-02 00:54:02.215851 | orchestrator | Thursday 02 April 2026 00:51:45 +0000 (0:00:00.757) 0:07:40.646 ******** 2026-04-02 00:54:02.215855 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.215859 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.215863 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.215867 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.215870 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.215874 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.215878 | orchestrator | 2026-04-02 00:54:02.215882 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-04-02 00:54:02.215886 | orchestrator | Thursday 02 April 2026 00:51:46 +0000 (0:00:01.220) 0:07:41.867 ******** 2026-04-02 00:54:02.215889 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.215893 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.215897 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.215901 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.215904 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.215908 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.215912 | orchestrator | 2026-04-02 00:54:02.215916 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-02 00:54:02.215919 | orchestrator | Thursday 02 April 2026 00:51:47 +0000 (0:00:00.511) 0:07:42.379 ******** 2026-04-02 00:54:02.215923 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.215929 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.215933 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.215937 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.215941 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.215944 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.215952 | orchestrator | 2026-04-02 00:54:02.215956 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-02 00:54:02.215960 | orchestrator | Thursday 02 April 2026 00:51:47 +0000 (0:00:00.489) 0:07:42.869 ******** 2026-04-02 00:54:02.215964 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.215968 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.215972 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.215975 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.215979 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.215983 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.215987 | orchestrator 
| 2026-04-02 00:54:02.215991 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-02 00:54:02.215994 | orchestrator | Thursday 02 April 2026 00:51:48 +0000 (0:00:01.187) 0:07:44.057 ******** 2026-04-02 00:54:02.215998 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216002 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216006 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216010 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216013 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.216017 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.216021 | orchestrator | 2026-04-02 00:54:02.216025 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-02 00:54:02.216028 | orchestrator | Thursday 02 April 2026 00:51:49 +0000 (0:00:01.030) 0:07:45.088 ******** 2026-04-02 00:54:02.216032 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216036 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216040 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216044 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.216051 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.216057 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.216063 | orchestrator | 2026-04-02 00:54:02.216069 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-02 00:54:02.216076 | orchestrator | Thursday 02 April 2026 00:51:50 +0000 (0:00:00.659) 0:07:45.748 ******** 2026-04-02 00:54:02.216082 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216089 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216096 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216102 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216108 | orchestrator | ok: [testbed-node-1] 2026-04-02 
00:54:02.216112 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.216116 | orchestrator | 2026-04-02 00:54:02.216119 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-02 00:54:02.216123 | orchestrator | Thursday 02 April 2026 00:51:51 +0000 (0:00:00.493) 0:07:46.241 ******** 2026-04-02 00:54:02.216127 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216143 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216150 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216156 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.216161 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.216166 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.216172 | orchestrator | 2026-04-02 00:54:02.216178 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-02 00:54:02.216185 | orchestrator | Thursday 02 April 2026 00:51:51 +0000 (0:00:00.657) 0:07:46.898 ******** 2026-04-02 00:54:02.216191 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216198 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216204 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216210 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.216216 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.216222 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.216226 | orchestrator | 2026-04-02 00:54:02.216232 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-02 00:54:02.216236 | orchestrator | Thursday 02 April 2026 00:51:52 +0000 (0:00:00.487) 0:07:47.385 ******** 2026-04-02 00:54:02.216240 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216247 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216251 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216254 | orchestrator | skipping: [testbed-node-0] 
2026-04-02 00:54:02.216258 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.216262 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.216266 | orchestrator | 2026-04-02 00:54:02.216269 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-02 00:54:02.216273 | orchestrator | Thursday 02 April 2026 00:51:52 +0000 (0:00:00.688) 0:07:48.074 ******** 2026-04-02 00:54:02.216277 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216281 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216284 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216288 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.216292 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.216296 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.216299 | orchestrator | 2026-04-02 00:54:02.216303 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-02 00:54:02.216307 | orchestrator | Thursday 02 April 2026 00:51:53 +0000 (0:00:00.504) 0:07:48.579 ******** 2026-04-02 00:54:02.216311 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216314 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216318 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216322 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:02.216326 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:02.216329 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:02.216333 | orchestrator | 2026-04-02 00:54:02.216337 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-02 00:54:02.216341 | orchestrator | Thursday 02 April 2026 00:51:54 +0000 (0:00:00.658) 0:07:49.238 ******** 2026-04-02 00:54:02.216345 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216348 | orchestrator | skipping: [testbed-node-4] 
2026-04-02 00:54:02.216352 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216356 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216359 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.216363 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.216367 | orchestrator | 2026-04-02 00:54:02.216371 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-02 00:54:02.216378 | orchestrator | Thursday 02 April 2026 00:51:54 +0000 (0:00:00.508) 0:07:49.746 ******** 2026-04-02 00:54:02.216382 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216385 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216389 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216393 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216397 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.216400 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.216404 | orchestrator | 2026-04-02 00:54:02.216408 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-02 00:54:02.216412 | orchestrator | Thursday 02 April 2026 00:51:55 +0000 (0:00:00.737) 0:07:50.484 ******** 2026-04-02 00:54:02.216416 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216419 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216423 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216427 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216430 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.216434 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.216438 | orchestrator | 2026-04-02 00:54:02.216442 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-02 00:54:02.216446 | orchestrator | Thursday 02 April 2026 00:51:56 +0000 (0:00:01.028) 0:07:51.512 ******** 2026-04-02 00:54:02.216449 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-04-02 00:54:02.216453 | orchestrator | 2026-04-02 00:54:02.216457 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-02 00:54:02.216461 | orchestrator | Thursday 02 April 2026 00:51:59 +0000 (0:00:03.192) 0:07:54.704 ******** 2026-04-02 00:54:02.216468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-02 00:54:02.216472 | orchestrator | 2026-04-02 00:54:02.216477 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-02 00:54:02.216484 | orchestrator | Thursday 02 April 2026 00:52:01 +0000 (0:00:01.637) 0:07:56.342 ******** 2026-04-02 00:54:02.216490 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.216496 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.216502 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.216508 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216514 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.216521 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.216527 | orchestrator | 2026-04-02 00:54:02.216534 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-02 00:54:02.216538 | orchestrator | Thursday 02 April 2026 00:52:02 +0000 (0:00:01.495) 0:07:57.837 ******** 2026-04-02 00:54:02.216542 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.216545 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.216549 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.216553 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.216557 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.216560 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.216564 | orchestrator | 2026-04-02 00:54:02.216568 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-04-02 00:54:02.216572 | orchestrator | Thursday 02 April 2026 00:52:03 +0000 (0:00:01.099) 0:07:58.937 ******** 2026-04-02 00:54:02.216576 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.216581 | orchestrator | 2026-04-02 00:54:02.216584 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-02 00:54:02.216588 | orchestrator | Thursday 02 April 2026 00:52:04 +0000 (0:00:01.065) 0:08:00.002 ******** 2026-04-02 00:54:02.216592 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.216596 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.216599 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.216603 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.216607 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.216613 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.216617 | orchestrator | 2026-04-02 00:54:02.216621 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-02 00:54:02.216624 | orchestrator | Thursday 02 April 2026 00:52:06 +0000 (0:00:01.494) 0:08:01.496 ******** 2026-04-02 00:54:02.216628 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.216632 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.216635 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.216639 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.216643 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.216647 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.216650 | orchestrator | 2026-04-02 00:54:02.216654 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-02 00:54:02.216658 | orchestrator | Thursday 02 April 2026 00:52:10 +0000 (0:00:04.030) 
0:08:05.527 ******** 2026-04-02 00:54:02.216662 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:54:02.216666 | orchestrator | 2026-04-02 00:54:02.216669 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-02 00:54:02.216673 | orchestrator | Thursday 02 April 2026 00:52:11 +0000 (0:00:01.058) 0:08:06.586 ******** 2026-04-02 00:54:02.216677 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216681 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216684 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216688 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216695 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.216699 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.216702 | orchestrator | 2026-04-02 00:54:02.216706 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-02 00:54:02.216710 | orchestrator | Thursday 02 April 2026 00:52:11 +0000 (0:00:00.526) 0:08:07.112 ******** 2026-04-02 00:54:02.216714 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.216717 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.216721 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.216725 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:54:02.216729 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:54:02.216732 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:54:02.216736 | orchestrator | 2026-04-02 00:54:02.216740 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-02 00:54:02.216746 | orchestrator | Thursday 02 April 2026 00:52:13 +0000 (0:00:02.021) 0:08:09.134 ******** 2026-04-02 00:54:02.216750 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216754 | 
orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216757 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216761 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:54:02.216765 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:54:02.216769 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:54:02.216772 | orchestrator | 2026-04-02 00:54:02.216776 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-02 00:54:02.216780 | orchestrator | 2026-04-02 00:54:02.216784 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-02 00:54:02.216788 | orchestrator | Thursday 02 April 2026 00:52:14 +0000 (0:00:00.745) 0:08:09.880 ******** 2026-04-02 00:54:02.216791 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:54:02.216795 | orchestrator | 2026-04-02 00:54:02.216799 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-02 00:54:02.216803 | orchestrator | Thursday 02 April 2026 00:52:15 +0000 (0:00:00.672) 0:08:10.552 ******** 2026-04-02 00:54:02.216806 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:54:02.216810 | orchestrator | 2026-04-02 00:54:02.216814 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-02 00:54:02.216818 | orchestrator | Thursday 02 April 2026 00:52:15 +0000 (0:00:00.473) 0:08:11.025 ******** 2026-04-02 00:54:02.216822 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216825 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216829 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216833 | orchestrator | 2026-04-02 00:54:02.216837 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-04-02 00:54:02.216840 | orchestrator | Thursday 02 April 2026 00:52:16 +0000 (0:00:00.423) 0:08:11.449 ******** 2026-04-02 00:54:02.216844 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216848 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216851 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216855 | orchestrator | 2026-04-02 00:54:02.216859 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-02 00:54:02.216863 | orchestrator | Thursday 02 April 2026 00:52:16 +0000 (0:00:00.680) 0:08:12.129 ******** 2026-04-02 00:54:02.216866 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216870 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216874 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216878 | orchestrator | 2026-04-02 00:54:02.216881 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-02 00:54:02.216885 | orchestrator | Thursday 02 April 2026 00:52:17 +0000 (0:00:00.705) 0:08:12.835 ******** 2026-04-02 00:54:02.216889 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216893 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216896 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216902 | orchestrator | 2026-04-02 00:54:02.216906 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-02 00:54:02.216910 | orchestrator | Thursday 02 April 2026 00:52:18 +0000 (0:00:00.674) 0:08:13.510 ******** 2026-04-02 00:54:02.216914 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216918 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216921 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216925 | orchestrator | 2026-04-02 00:54:02.216929 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-02 
00:54:02.216932 | orchestrator | Thursday 02 April 2026 00:52:18 +0000 (0:00:00.438) 0:08:13.948 ******** 2026-04-02 00:54:02.216936 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216940 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216946 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216950 | orchestrator | 2026-04-02 00:54:02.216954 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-02 00:54:02.216957 | orchestrator | Thursday 02 April 2026 00:52:18 +0000 (0:00:00.262) 0:08:14.211 ******** 2026-04-02 00:54:02.216961 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.216965 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.216969 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.216973 | orchestrator | 2026-04-02 00:54:02.216976 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-02 00:54:02.216980 | orchestrator | Thursday 02 April 2026 00:52:19 +0000 (0:00:00.321) 0:08:14.532 ******** 2026-04-02 00:54:02.216984 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.216988 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.216991 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.216995 | orchestrator | 2026-04-02 00:54:02.216999 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-02 00:54:02.217003 | orchestrator | Thursday 02 April 2026 00:52:20 +0000 (0:00:00.691) 0:08:15.224 ******** 2026-04-02 00:54:02.217006 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.217010 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.217014 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.217018 | orchestrator | 2026-04-02 00:54:02.217021 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-02 00:54:02.217025 | orchestrator | 
Thursday 02 April 2026 00:52:20 +0000 (0:00:00.996) 0:08:16.220 ********
2026-04-02 00:54:02.217029 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217032 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.217036 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.217040 | orchestrator |
2026-04-02 00:54:02.217045 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-02 00:54:02.217051 | orchestrator | Thursday 02 April 2026 00:52:21 +0000 (0:00:00.297) 0:08:16.518 ********
2026-04-02 00:54:02.217057 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217063 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.217070 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.217076 | orchestrator |
2026-04-02 00:54:02.217083 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-02 00:54:02.217089 | orchestrator | Thursday 02 April 2026 00:52:21 +0000 (0:00:00.314) 0:08:16.833 ********
2026-04-02 00:54:02.217095 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217099 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217105 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217111 | orchestrator |
2026-04-02 00:54:02.217117 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-02 00:54:02.217123 | orchestrator | Thursday 02 April 2026 00:52:21 +0000 (0:00:00.320) 0:08:17.154 ********
2026-04-02 00:54:02.217128 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217145 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217151 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217157 | orchestrator |
2026-04-02 00:54:02.217163 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-02 00:54:02.217173 | orchestrator | Thursday 02 April 2026 00:52:22 +0000 (0:00:00.646) 0:08:17.800 ********
2026-04-02 00:54:02.217179 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217185 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217190 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217196 | orchestrator |
2026-04-02 00:54:02.217202 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-02 00:54:02.217208 | orchestrator | Thursday 02 April 2026 00:52:22 +0000 (0:00:00.319) 0:08:18.120 ********
2026-04-02 00:54:02.217214 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217219 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.217225 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.217231 | orchestrator |
2026-04-02 00:54:02.217237 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-02 00:54:02.217242 | orchestrator | Thursday 02 April 2026 00:52:23 +0000 (0:00:00.272) 0:08:18.392 ********
2026-04-02 00:54:02.217248 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217254 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.217260 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.217266 | orchestrator |
2026-04-02 00:54:02.217271 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-02 00:54:02.217277 | orchestrator | Thursday 02 April 2026 00:52:23 +0000 (0:00:00.291) 0:08:18.684 ********
2026-04-02 00:54:02.217283 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217289 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.217295 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.217300 | orchestrator |
2026-04-02 00:54:02.217306 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-02 00:54:02.217312 | orchestrator | Thursday 02 April 2026 00:52:24 +0000 (0:00:00.548) 0:08:19.233 ********
2026-04-02 00:54:02.217317 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217323 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217329 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217335 | orchestrator |
2026-04-02 00:54:02.217341 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-02 00:54:02.217346 | orchestrator | Thursday 02 April 2026 00:52:24 +0000 (0:00:00.301) 0:08:19.535 ********
2026-04-02 00:54:02.217352 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217358 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217364 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217370 | orchestrator |
2026-04-02 00:54:02.217375 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-02 00:54:02.217381 | orchestrator | Thursday 02 April 2026 00:52:24 +0000 (0:00:00.521) 0:08:20.056 ********
2026-04-02 00:54:02.217387 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.217393 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.217399 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-02 00:54:02.217405 | orchestrator |
2026-04-02 00:54:02.217410 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-02 00:54:02.217416 | orchestrator | Thursday 02 April 2026 00:52:25 +0000 (0:00:00.633) 0:08:20.689 ********
2026-04-02 00:54:02.217422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-02 00:54:02.217428 | orchestrator |
2026-04-02 00:54:02.217436 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-02 00:54:02.217442 | orchestrator | Thursday 02 April 2026 00:52:27 +0000 (0:00:01.806) 0:08:22.495 ********
2026-04-02 00:54:02.217449 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-02 00:54:02.217456 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217462 | orchestrator |
2026-04-02 00:54:02.217468 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-02 00:54:02.217478 | orchestrator | Thursday 02 April 2026 00:52:27 +0000 (0:00:00.197) 0:08:22.693 ********
2026-04-02 00:54:02.217485 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-02 00:54:02.217495 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-02 00:54:02.217501 | orchestrator |
2026-04-02 00:54:02.217508 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-02 00:54:02.217513 | orchestrator | Thursday 02 April 2026 00:52:33 +0000 (0:00:06.379) 0:08:29.072 ********
2026-04-02 00:54:02.217519 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-02 00:54:02.217525 | orchestrator |
2026-04-02 00:54:02.217531 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-02 00:54:02.217537 | orchestrator | Thursday 02 April 2026 00:52:36 +0000 (0:00:02.837) 0:08:31.909 ********
2026-04-02 00:54:02.217547 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.217554 | orchestrator |
2026-04-02 00:54:02.217561 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-02 00:54:02.217567 | orchestrator | Thursday 02 April 2026 00:52:37 +0000 (0:00:00.717) 0:08:32.627 ********
2026-04-02 00:54:02.217574 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-02 00:54:02.217578 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-02 00:54:02.217582 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-02 00:54:02.217586 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-02 00:54:02.217589 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-02 00:54:02.217593 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-02 00:54:02.217597 | orchestrator |
2026-04-02 00:54:02.217601 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-02 00:54:02.217604 | orchestrator | Thursday 02 April 2026 00:52:38 +0000 (0:00:01.108) 0:08:33.735 ********
2026-04-02 00:54:02.217608 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.217612 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-02 00:54:02.217615 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-02 00:54:02.217619 | orchestrator |
2026-04-02 00:54:02.217623 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-02 00:54:02.217627 | orchestrator | Thursday 02 April 2026 00:52:40 +0000 (0:00:01.827) 0:08:35.563 ********
2026-04-02 00:54:02.217630 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-02 00:54:02.217634 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-02 00:54:02.217638 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217642 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-02 00:54:02.217645 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-02 00:54:02.217649 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217653 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-02 00:54:02.217656 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-02 00:54:02.217660 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217664 | orchestrator |
2026-04-02 00:54:02.217667 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-02 00:54:02.217671 | orchestrator | Thursday 02 April 2026 00:52:41 +0000 (0:00:01.186) 0:08:36.750 ********
2026-04-02 00:54:02.217679 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217682 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217686 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217690 | orchestrator |
2026-04-02 00:54:02.217694 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-02 00:54:02.217697 | orchestrator | Thursday 02 April 2026 00:52:44 +0000 (0:00:02.771) 0:08:39.521 ********
2026-04-02 00:54:02.217701 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217705 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.217708 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.217712 | orchestrator |
2026-04-02 00:54:02.217716 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-02 00:54:02.217719 | orchestrator | Thursday 02 April 2026 00:52:44 +0000 (0:00:00.432) 0:08:39.954 ********
2026-04-02 00:54:02.217723 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.217727 | orchestrator |
2026-04-02 00:54:02.217733 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-02 00:54:02.217737 | orchestrator | Thursday 02 April 2026 00:52:45 +0000 (0:00:00.822) 0:08:40.776 ********
2026-04-02 00:54:02.217741 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-04-02 00:54:02.217745 | orchestrator |
2026-04-02 00:54:02.217749 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-02 00:54:02.217752 | orchestrator | Thursday 02 April 2026 00:52:46 +0000 (0:00:00.892) 0:08:41.668 ********
2026-04-02 00:54:02.217756 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217760 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217764 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217767 | orchestrator |
2026-04-02 00:54:02.217771 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-02 00:54:02.217775 | orchestrator | Thursday 02 April 2026 00:52:47 +0000 (0:00:01.523) 0:08:43.192 ********
2026-04-02 00:54:02.217778 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217782 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217786 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217789 | orchestrator |
2026-04-02 00:54:02.217793 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-02 00:54:02.217797 | orchestrator | Thursday 02 April 2026 00:52:49 +0000 (0:00:01.487) 0:08:44.680 ********
2026-04-02 00:54:02.217801 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217804 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217808 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217812 | orchestrator |
2026-04-02 00:54:02.217815 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-02 00:54:02.217819 | orchestrator | Thursday 02 April 2026 00:52:51 +0000 (0:00:02.272) 0:08:46.952 ********
2026-04-02 00:54:02.217823 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217826 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217830 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217834 | orchestrator |
2026-04-02 00:54:02.217838 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-02 00:54:02.217841 | orchestrator | Thursday 02 April 2026 00:52:54 +0000 (0:00:02.589) 0:08:49.542 ********
2026-04-02 00:54:02.217845 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217849 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217852 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217856 | orchestrator |
2026-04-02 00:54:02.217862 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-02 00:54:02.217866 | orchestrator | Thursday 02 April 2026 00:52:55 +0000 (0:00:01.360) 0:08:50.903 ********
2026-04-02 00:54:02.217870 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217873 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217877 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217884 | orchestrator |
2026-04-02 00:54:02.217888 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-02 00:54:02.217892 | orchestrator | Thursday 02 April 2026 00:52:56 +0000 (0:00:01.009) 0:08:51.912 ********
2026-04-02 00:54:02.217896 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.217899 | orchestrator |
2026-04-02 00:54:02.217903 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-02 00:54:02.217907 | orchestrator | Thursday 02 April 2026 00:52:57 +0000 (0:00:00.557) 0:08:52.470 ********
2026-04-02 00:54:02.217910 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217914 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217918 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217922 | orchestrator |
2026-04-02 00:54:02.217925 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-02 00:54:02.217929 | orchestrator | Thursday 02 April 2026 00:52:57 +0000 (0:00:00.328) 0:08:52.798 ********
2026-04-02 00:54:02.217933 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.217936 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.217940 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.217944 | orchestrator |
2026-04-02 00:54:02.217948 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-02 00:54:02.217951 | orchestrator | Thursday 02 April 2026 00:52:59 +0000 (0:00:01.498) 0:08:54.297 ********
2026-04-02 00:54:02.217955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-02 00:54:02.217959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-02 00:54:02.217962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-02 00:54:02.217966 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.217970 | orchestrator |
2026-04-02 00:54:02.217973 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-02 00:54:02.217977 | orchestrator | Thursday 02 April 2026 00:52:59 +0000 (0:00:00.595) 0:08:54.892 ********
2026-04-02 00:54:02.217981 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.217985 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.217988 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.217992 | orchestrator |
2026-04-02 00:54:02.217996 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-02 00:54:02.217999 | orchestrator |
2026-04-02 00:54:02.218003 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-02 00:54:02.218007 | orchestrator | Thursday 02 April 2026 00:53:00 +0000 (0:00:00.589) 0:08:55.482 ********
2026-04-02 00:54:02.218011 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.218054 | orchestrator |
2026-04-02 00:54:02.218061 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-02 00:54:02.218067 | orchestrator | Thursday 02 April 2026 00:53:00 +0000 (0:00:00.708) 0:08:56.190 ********
2026-04-02 00:54:02.218073 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.218079 | orchestrator |
2026-04-02 00:54:02.218089 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-02 00:54:02.218096 | orchestrator | Thursday 02 April 2026 00:53:01 +0000 (0:00:00.476) 0:08:56.666 ********
2026-04-02 00:54:02.218103 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218109 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218116 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218121 | orchestrator |
2026-04-02 00:54:02.218124 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-02 00:54:02.218128 | orchestrator | Thursday 02 April 2026 00:53:01 +0000 (0:00:00.511) 0:08:57.178 ********
2026-04-02 00:54:02.218158 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218164 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218176 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218182 | orchestrator |
2026-04-02 00:54:02.218188 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-02 00:54:02.218194 | orchestrator | Thursday 02 April 2026 00:53:02 +0000 (0:00:00.720) 0:08:57.898 ********
2026-04-02 00:54:02.218199 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218205 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218211 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218217 | orchestrator |
2026-04-02 00:54:02.218222 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-02 00:54:02.218228 | orchestrator | Thursday 02 April 2026 00:53:03 +0000 (0:00:00.728) 0:08:58.627 ********
2026-04-02 00:54:02.218235 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218241 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218247 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218254 | orchestrator |
2026-04-02 00:54:02.218261 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-02 00:54:02.218267 | orchestrator | Thursday 02 April 2026 00:53:04 +0000 (0:00:00.777) 0:08:59.404 ********
2026-04-02 00:54:02.218273 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218277 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218280 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218284 | orchestrator |
2026-04-02 00:54:02.218288 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-02 00:54:02.218292 | orchestrator | Thursday 02 April 2026 00:53:04 +0000 (0:00:00.529) 0:08:59.934 ********
2026-04-02 00:54:02.218296 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218299 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218303 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218307 | orchestrator |
2026-04-02 00:54:02.218311 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-02 00:54:02.218319 | orchestrator | Thursday 02 April 2026 00:53:05 +0000 (0:00:00.297) 0:09:00.231 ********
2026-04-02 00:54:02.218324 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218327 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218331 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218335 | orchestrator |
2026-04-02 00:54:02.218338 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-02 00:54:02.218342 | orchestrator | Thursday 02 April 2026 00:53:05 +0000 (0:00:00.329) 0:09:00.561 ********
2026-04-02 00:54:02.218346 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218350 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218353 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218357 | orchestrator |
2026-04-02 00:54:02.218361 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-02 00:54:02.218365 | orchestrator | Thursday 02 April 2026 00:53:06 +0000 (0:00:00.672) 0:09:01.234 ********
2026-04-02 00:54:02.218369 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218372 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218376 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218380 | orchestrator |
2026-04-02 00:54:02.218383 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-02 00:54:02.218387 | orchestrator | Thursday 02 April 2026 00:53:07 +0000 (0:00:01.014) 0:09:02.249 ********
2026-04-02 00:54:02.218391 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218395 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218398 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218402 | orchestrator |
2026-04-02 00:54:02.218406 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-02 00:54:02.218410 | orchestrator | Thursday 02 April 2026 00:53:07 +0000 (0:00:00.299) 0:09:02.548 ********
2026-04-02 00:54:02.218414 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218418 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218421 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218429 | orchestrator |
2026-04-02 00:54:02.218433 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-02 00:54:02.218436 | orchestrator | Thursday 02 April 2026 00:53:07 +0000 (0:00:00.304) 0:09:02.853 ********
2026-04-02 00:54:02.218440 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218444 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218448 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218451 | orchestrator |
2026-04-02 00:54:02.218455 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-02 00:54:02.218459 | orchestrator | Thursday 02 April 2026 00:53:07 +0000 (0:00:00.311) 0:09:03.165 ********
2026-04-02 00:54:02.218463 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218467 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218470 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218474 | orchestrator |
2026-04-02 00:54:02.218478 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-02 00:54:02.218481 | orchestrator | Thursday 02 April 2026 00:53:08 +0000 (0:00:00.555) 0:09:03.721 ********
2026-04-02 00:54:02.218485 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218489 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218493 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218496 | orchestrator |
2026-04-02 00:54:02.218500 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-02 00:54:02.218504 | orchestrator | Thursday 02 April 2026 00:53:08 +0000 (0:00:00.319) 0:09:04.040 ********
2026-04-02 00:54:02.218508 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218512 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218515 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218519 | orchestrator |
2026-04-02 00:54:02.218523 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-02 00:54:02.218529 | orchestrator | Thursday 02 April 2026 00:53:09 +0000 (0:00:00.291) 0:09:04.331 ********
2026-04-02 00:54:02.218533 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218537 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218541 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218545 | orchestrator |
2026-04-02 00:54:02.218548 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-02 00:54:02.218552 | orchestrator | Thursday 02 April 2026 00:53:09 +0000 (0:00:00.321) 0:09:04.652 ********
2026-04-02 00:54:02.218556 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218560 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218564 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218567 | orchestrator |
2026-04-02 00:54:02.218571 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-02 00:54:02.218575 | orchestrator | Thursday 02 April 2026 00:53:09 +0000 (0:00:00.517) 0:09:05.170 ********
2026-04-02 00:54:02.218579 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218583 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218586 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218590 | orchestrator |
2026-04-02 00:54:02.218594 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-02 00:54:02.218598 | orchestrator | Thursday 02 April 2026 00:53:10 +0000 (0:00:00.335) 0:09:05.505 ********
2026-04-02 00:54:02.218601 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:54:02.218605 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:54:02.218609 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:54:02.218613 | orchestrator |
2026-04-02 00:54:02.218616 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-02 00:54:02.218620 | orchestrator | Thursday 02 April 2026 00:53:10 +0000 (0:00:00.458) 0:09:05.964 ********
2026-04-02 00:54:02.218624 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.218628 | orchestrator |
2026-04-02 00:54:02.218632 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-02 00:54:02.218636 | orchestrator | Thursday 02 April 2026 00:53:11 +0000 (0:00:00.586) 0:09:06.550 ********
2026-04-02 00:54:02.218642 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.218646 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-02 00:54:02.218650 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-02 00:54:02.218654 | orchestrator |
2026-04-02 00:54:02.218660 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-02 00:54:02.218664 | orchestrator | Thursday 02 April 2026 00:53:13 +0000 (0:00:01.782) 0:09:08.332 ********
2026-04-02 00:54:02.218667 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-02 00:54:02.218671 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-02 00:54:02.218675 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-02 00:54:02.218679 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-02 00:54:02.218683 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.218687 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.218690 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-02 00:54:02.218694 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-02 00:54:02.218698 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.218702 | orchestrator |
2026-04-02 00:54:02.218705 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-02 00:54:02.218709 | orchestrator | Thursday 02 April 2026 00:53:14 +0000 (0:00:01.207) 0:09:09.540 ********
2026-04-02 00:54:02.218713 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218717 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.218720 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.218724 | orchestrator |
2026-04-02 00:54:02.218728 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-02 00:54:02.218732 | orchestrator | Thursday 02 April 2026 00:53:14 +0000 (0:00:00.271) 0:09:09.811 ********
2026-04-02 00:54:02.218736 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.218739 | orchestrator |
2026-04-02 00:54:02.218743 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-02 00:54:02.218747 | orchestrator | Thursday 02 April 2026 00:53:15 +0000 (0:00:00.623) 0:09:10.435 ********
2026-04-02 00:54:02.218751 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.218756 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.218759 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-02 00:54:02.218763 | orchestrator |
2026-04-02 00:54:02.218767 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-02 00:54:02.218771 | orchestrator | Thursday 02 April 2026 00:53:15 +0000 (0:00:00.763) 0:09:11.199 ********
2026-04-02 00:54:02.218775 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.218778 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-02 00:54:02.218782 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.218786 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-02 00:54:02.218790 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.218796 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-02 00:54:02.218800 | orchestrator |
2026-04-02 00:54:02.218805 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-02 00:54:02.218809 | orchestrator | Thursday 02 April 2026 00:53:19 +0000 (0:00:03.604) 0:09:14.803 ********
2026-04-02 00:54:02.218813 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.218817 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-02 00:54:02.218820 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.218824 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-02 00:54:02.218828 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-02 00:54:02.218832 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-02 00:54:02.218835 | orchestrator |
2026-04-02 00:54:02.218839 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-02 00:54:02.218843 | orchestrator | Thursday 02 April 2026 00:53:21 +0000 (0:00:02.208) 0:09:17.012 ********
2026-04-02 00:54:02.218847 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-02 00:54:02.218850 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.218854 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-02 00:54:02.218858 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.218862 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-02 00:54:02.218866 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.218869 | orchestrator |
2026-04-02 00:54:02.218873 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-02 00:54:02.218877 | orchestrator | Thursday 02 April 2026 00:53:23 +0000 (0:00:01.250) 0:09:18.262 ********
2026-04-02 00:54:02.218881 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-04-02 00:54:02.218885 | orchestrator |
2026-04-02 00:54:02.218888 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-02 00:54:02.218892 | orchestrator | Thursday 02 April 2026 00:53:23 +0000 (0:00:00.216) 0:09:18.479 ********
2026-04-02 00:54:02.218898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218917 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218921 | orchestrator |
2026-04-02 00:54:02.218925 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-02 00:54:02.218929 | orchestrator | Thursday 02 April 2026 00:53:23 +0000 (0:00:00.603) 0:09:19.083 ********
2026-04-02 00:54:02.218933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218954 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.218958 | orchestrator |
2026-04-02 00:54:02.218962 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-02 00:54:02.218966 | orchestrator | Thursday 02 April 2026 00:53:24 +0000 (0:00:00.579) 0:09:19.663 ********
2026-04-02 00:54:02.218970 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218974 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218977 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218981 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218987 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-02 00:54:02.218991 | orchestrator |
2026-04-02 00:54:02.218995 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-02 00:54:02.218999 | orchestrator | Thursday 02 April 2026 00:53:46 +0000 (0:00:22.479) 0:09:42.142 ********
2026-04-02 00:54:02.219003 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.219006 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.219010 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.219014 | orchestrator |
2026-04-02 00:54:02.219018 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-02 00:54:02.219022 | orchestrator | Thursday 02 April 2026 00:53:47 +0000 (0:00:00.303) 0:09:42.445 ********
2026-04-02 00:54:02.219025 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:54:02.219029 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:54:02.219033 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:54:02.219037 | orchestrator |
2026-04-02 00:54:02.219040 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-02 00:54:02.219045 | orchestrator | Thursday 02 April 2026 00:53:47 +0000 (0:00:00.576) 0:09:43.021 ********
2026-04-02 00:54:02.219052 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.219058 | orchestrator |
2026-04-02 00:54:02.219064 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-02 00:54:02.219070 | orchestrator | Thursday 02 April 2026 00:53:48 +0000 (0:00:00.590) 0:09:43.612 ********
2026-04-02 00:54:02.219076 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:54:02.219082 | orchestrator |
2026-04-02 00:54:02.219089 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-02 00:54:02.219096 | orchestrator | Thursday 02 April 2026 00:53:49 +0000 (0:00:00.720) 0:09:44.333 ********
2026-04-02 00:54:02.219102 | orchestrator | changed: [testbed-node-3]
2026-04-02 00:54:02.219109 | orchestrator | changed: [testbed-node-4]
2026-04-02 00:54:02.219114 | orchestrator | changed: [testbed-node-5]
2026-04-02 00:54:02.219118 | orchestrator |
2026-04-02 00:54:02.219122 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-02 00:54:02.219128 | orchestrator | Thursday 02 April 2026 00:53:50 +0000 (0:00:01.334) 0:09:45.667 ********
2026-04-02 00:54:02.219141 | orchestrator | changed:
[testbed-node-3] 2026-04-02 00:54:02.219145 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.219149 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.219152 | orchestrator | 2026-04-02 00:54:02.219156 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-02 00:54:02.219165 | orchestrator | Thursday 02 April 2026 00:53:51 +0000 (0:00:01.180) 0:09:46.848 ******** 2026-04-02 00:54:02.219169 | orchestrator | changed: [testbed-node-3] 2026-04-02 00:54:02.219173 | orchestrator | changed: [testbed-node-4] 2026-04-02 00:54:02.219176 | orchestrator | changed: [testbed-node-5] 2026-04-02 00:54:02.219180 | orchestrator | 2026-04-02 00:54:02.219184 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-02 00:54:02.219188 | orchestrator | Thursday 02 April 2026 00:53:53 +0000 (0:00:01.911) 0:09:48.760 ******** 2026-04-02 00:54:02.219191 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-02 00:54:02.219195 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-02 00:54:02.219199 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-02 00:54:02.219203 | orchestrator | 2026-04-02 00:54:02.219207 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-02 00:54:02.219211 | orchestrator | Thursday 02 April 2026 00:53:56 +0000 (0:00:02.781) 0:09:51.542 ******** 2026-04-02 00:54:02.219214 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.219218 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.219222 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.219226 | orchestrator 
| 2026-04-02 00:54:02.219229 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-02 00:54:02.219233 | orchestrator | Thursday 02 April 2026 00:53:56 +0000 (0:00:00.333) 0:09:51.875 ******** 2026-04-02 00:54:02.219237 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 00:54:02.219241 | orchestrator | 2026-04-02 00:54:02.219245 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-02 00:54:02.219249 | orchestrator | Thursday 02 April 2026 00:53:57 +0000 (0:00:00.770) 0:09:52.646 ******** 2026-04-02 00:54:02.219252 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.219256 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.219260 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.219264 | orchestrator | 2026-04-02 00:54:02.219268 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-02 00:54:02.219271 | orchestrator | Thursday 02 April 2026 00:53:57 +0000 (0:00:00.305) 0:09:52.951 ******** 2026-04-02 00:54:02.219275 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:54:02.219279 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:54:02.219283 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:54:02.219287 | orchestrator | 2026-04-02 00:54:02.219290 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-02 00:54:02.219294 | orchestrator | Thursday 02 April 2026 00:53:58 +0000 (0:00:00.331) 0:09:53.283 ******** 2026-04-02 00:54:02.219298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-02 00:54:02.219302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-02 00:54:02.219306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-02 00:54:02.219312 | orchestrator 
| skipping: [testbed-node-3] 2026-04-02 00:54:02.219316 | orchestrator | 2026-04-02 00:54:02.219319 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-02 00:54:02.219323 | orchestrator | Thursday 02 April 2026 00:53:59 +0000 (0:00:01.095) 0:09:54.379 ******** 2026-04-02 00:54:02.219327 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:54:02.219331 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:54:02.219334 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:54:02.219338 | orchestrator | 2026-04-02 00:54:02.219342 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:54:02.219346 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-02 00:54:02.219352 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-02 00:54:02.219356 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-02 00:54:02.219360 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-02 00:54:02.219364 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-02 00:54:02.219368 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-02 00:54:02.219372 | orchestrator | 2026-04-02 00:54:02.219375 | orchestrator | 2026-04-02 00:54:02.219379 | orchestrator | 2026-04-02 00:54:02.219383 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:54:02.219387 | orchestrator | Thursday 02 April 2026 00:53:59 +0000 (0:00:00.261) 0:09:54.640 ******** 2026-04-02 00:54:02.219391 | orchestrator | =============================================================================== 
2026-04-02 00:54:02.219396 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.33s
2026-04-02 00:54:02.219400 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 35.29s
2026-04-02 00:54:02.219404 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 22.48s
2026-04-02 00:54:02.219408 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.51s
2026-04-02 00:54:02.219412 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.44s
2026-04-02 00:54:02.219416 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.43s
2026-04-02 00:54:02.219420 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.60s
2026-04-02 00:54:02.219423 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.26s
2026-04-02 00:54:02.219427 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.38s
2026-04-02 00:54:02.219431 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.27s
2026-04-02 00:54:02.219435 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 5.96s
2026-04-02 00:54:02.219439 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 5.40s
2026-04-02 00:54:02.219443 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.57s
2026-04-02 00:54:02.219447 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.18s
2026-04-02 00:54:02.219450 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.03s
2026-04-02 00:54:02.219454 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.60s
2026-04-02 00:54:02.219458 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.42s
2026-04-02 00:54:02.219462 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.20s
2026-04-02 00:54:02.219466 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.19s
2026-04-02 00:54:02.219469 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.17s
2026-04-02 00:54:02.219473 | orchestrator | 2026-04-02 00:54:02 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:02.219477 | orchestrator | 2026-04-02 00:54:02 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:02.219481 | orchestrator | 2026-04-02 00:54:02 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:05.253092 | orchestrator | 2026-04-02 00:54:05 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:05.257943 | orchestrator | 2026-04-02 00:54:05 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:05.259887 | orchestrator | 2026-04-02 00:54:05 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:05.260236 | orchestrator | 2026-04-02 00:54:05 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:08.299513 | orchestrator | 2026-04-02 00:54:08 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:08.302693 | orchestrator | 2026-04-02 00:54:08 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:08.304738 | orchestrator | 2026-04-02 00:54:08 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:08.305652 | orchestrator | 2026-04-02 00:54:08 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:11.354053 | orchestrator | 2026-04-02 00:54:11 | INFO  | Task 
bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:11.358241 | orchestrator | 2026-04-02 00:54:11 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:11.359229 | orchestrator | 2026-04-02 00:54:11 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:11.359450 | orchestrator | 2026-04-02 00:54:11 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:14.410378 | orchestrator | 2026-04-02 00:54:14 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:14.411042 | orchestrator | 2026-04-02 00:54:14 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:14.412216 | orchestrator | 2026-04-02 00:54:14 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:14.412256 | orchestrator | 2026-04-02 00:54:14 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:17.451890 | orchestrator | 2026-04-02 00:54:17 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:17.452006 | orchestrator | 2026-04-02 00:54:17 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:17.452802 | orchestrator | 2026-04-02 00:54:17 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:17.452827 | orchestrator | 2026-04-02 00:54:17 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:20.494852 | orchestrator | 2026-04-02 00:54:20 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:20.496494 | orchestrator | 2026-04-02 00:54:20 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:20.499240 | orchestrator | 2026-04-02 00:54:20 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:20.499469 | orchestrator | 2026-04-02 00:54:20 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:23.541823 | orchestrator | 2026-04-02 00:54:23 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:23.542950 | orchestrator | 2026-04-02 00:54:23 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:23.544799 | orchestrator | 2026-04-02 00:54:23 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:23.545514 | orchestrator | 2026-04-02 00:54:23 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:26.590477 | orchestrator | 2026-04-02 00:54:26 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:26.591311 | orchestrator | 2026-04-02 00:54:26 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:26.592095 | orchestrator | 2026-04-02 00:54:26 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:26.592193 | orchestrator | 2026-04-02 00:54:26 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:29.630404 | orchestrator | 2026-04-02 00:54:29 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:29.631675 | orchestrator | 2026-04-02 00:54:29 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:29.633613 | orchestrator | 2026-04-02 00:54:29 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:29.633667 | orchestrator | 2026-04-02 00:54:29 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:32.683637 | orchestrator | 2026-04-02 00:54:32 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:32.686334 | orchestrator | 2026-04-02 00:54:32 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:32.688523 | orchestrator | 2026-04-02 00:54:32 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:32.688872 | orchestrator | 2026-04-02 00:54:32 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:35.734626 | orchestrator | 2026-04-02 00:54:35 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:35.736405 | orchestrator | 2026-04-02 00:54:35 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:35.738426 | orchestrator | 2026-04-02 00:54:35 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:35.738491 | orchestrator | 2026-04-02 00:54:35 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:38.780681 | orchestrator | 2026-04-02 00:54:38 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:38.782092 | orchestrator | 2026-04-02 00:54:38 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:38.783777 | orchestrator | 2026-04-02 00:54:38 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:38.783809 | orchestrator | 2026-04-02 00:54:38 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:41.826306 | orchestrator | 2026-04-02 00:54:41 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:41.827606 | orchestrator | 2026-04-02 00:54:41 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:41.829091 | orchestrator | 2026-04-02 00:54:41 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:41.829164 | orchestrator | 2026-04-02 00:54:41 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:44.865585 | orchestrator | 2026-04-02 00:54:44 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:44.867287 | orchestrator | 2026-04-02 00:54:44 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:44.868719 | orchestrator | 2026-04-02 00:54:44 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:44.868866 | orchestrator | 2026-04-02 00:54:44 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:47.914515 | orchestrator | 2026-04-02 00:54:47 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:47.915210 | orchestrator | 2026-04-02 00:54:47 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:47.916237 | orchestrator | 2026-04-02 00:54:47 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:47.916317 | orchestrator | 2026-04-02 00:54:47 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:50.964828 | orchestrator | 2026-04-02 00:54:50 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:50.966697 | orchestrator | 2026-04-02 00:54:50 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:50.968916 | orchestrator | 2026-04-02 00:54:50 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:50.968999 | orchestrator | 2026-04-02 00:54:50 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:54.024551 | orchestrator | 2026-04-02 00:54:54 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state STARTED
2026-04-02 00:54:54.026519 | orchestrator | 2026-04-02 00:54:54 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED
2026-04-02 00:54:54.029424 | orchestrator | 2026-04-02 00:54:54 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:54:54.029510 | orchestrator | 2026-04-02 00:54:54 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:54:57.065905 | orchestrator | 2026-04-02 00:54:57 | INFO  | Task bae6ffe3-bc43-4ada-bbec-72486e2eabb4 is in state SUCCESS
2026-04-02 00:54:57.066900 | orchestrator | 
2026-04-02 00:54:57.066931 | orchestrator | 
2026-04-02 00:54:57.066936 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 00:54:57.066942 | orchestrator | 
2026-04-02 00:54:57.066946 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 00:54:57.066951 | orchestrator | Thursday 02 April 2026 00:52:22 +0000 (0:00:00.319) 0:00:00.319 ********
2026-04-02 00:54:57.066956 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:57.066961 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:54:57.066966 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:54:57.066970 | orchestrator | 
2026-04-02 00:54:57.066975 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 00:54:57.066979 | orchestrator | Thursday 02 April 2026 00:52:22 +0000 (0:00:00.322) 0:00:00.642 ********
2026-04-02 00:54:57.066984 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-02 00:54:57.066989 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-02 00:54:57.066993 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-02 00:54:57.066997 | orchestrator | 
2026-04-02 00:54:57.067002 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-02 00:54:57.067007 | orchestrator | 
2026-04-02 00:54:57.067011 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-02 00:54:57.067024 | orchestrator | Thursday 02 April 2026 00:52:22 +0000 (0:00:00.318) 0:00:00.961 ********
2026-04-02 00:54:57.067028 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:57.067033 | orchestrator | 
2026-04-02 00:54:57.067037 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-02 00:54:57.067042 | orchestrator | Thursday 02 April 2026 
00:52:23 +0000 (0:00:00.602) 0:00:01.563 ********
2026-04-02 00:54:57.067046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-02 00:54:57.067050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-02 00:54:57.067055 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-02 00:54:57.067059 | orchestrator | 
2026-04-02 00:54:57.067064 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-02 00:54:57.067068 | orchestrator | Thursday 02 April 2026 00:52:24 +0000 (0:00:01.044) 0:00:02.608 ********
2026-04-02 00:54:57.067091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-02 00:54:57.067098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-02 00:54:57.067109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-02 00:54:57.067235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-02 00:54:57.067245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-02 00:54:57.067254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-02 00:54:57.067259 | orchestrator | 
2026-04-02 00:54:57.067263 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-02 00:54:57.067267 | orchestrator | Thursday 02 April 2026 00:52:25 +0000 (0:00:01.348) 0:00:03.956 ********
2026-04-02 00:54:57.067271 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:57.067281 | orchestrator | 
2026-04-02 00:54:57.067289 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-02 00:54:57.067293 | orchestrator | Thursday 02 April 2026 00:52:26 +0000 (0:00:00.487) 0:00:04.444 ********
2026-04-02 00:54:57.067301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-02 00:54:57.067310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-02 00:54:57.067317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-02 00:54:57.067322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-02 00:54:57.067329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-02 00:54:57.067336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-02 00:54:57.067343 | orchestrator | 
2026-04-02 00:54:57.067347 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-02 00:54:57.067350 | orchestrator | Thursday 02 April 2026 00:52:29 +0000 (0:00:02.740) 0:00:07.184 ********
2026-04-02 00:54:57.067354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:54:57.067359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:54:57.067363 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:57.067367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:54:57.067376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:54:57.067384 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:57.067391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:54:57.067400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:54:57.067409 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:57.067415 | orchestrator | 2026-04-02 00:54:57.067421 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-02 00:54:57.067427 | orchestrator | Thursday 02 April 2026 00:52:29 +0000 (0:00:00.586) 0:00:07.770 ******** 2026-04-02 00:54:57.067434 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:54:57.067448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:54:57.067460 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:57.067468 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:54:57.067475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:54:57.067481 | orchestrator | skipping: [testbed-node-1] 2026-04-02 
00:54:57.067487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-02 00:54:57.067499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-02 00:54:57.067510 | orchestrator | skipping: 
[testbed-node-2] 2026-04-02 00:54:57.067515 | orchestrator | 2026-04-02 00:54:57.067522 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-02 00:54:57.067535 | orchestrator | Thursday 02 April 2026 00:52:30 +0000 (0:00:00.784) 0:00:08.555 ******** 2026-04-02 00:54:57.067542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:54:57.067549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-04-02 00:54:57.067556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:54:57.067567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 
00:54:57.067582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 00:54:57.067590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-02 00:54:57.067597 | orchestrator |
2026-04-02 00:54:57.067601 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-02 00:54:57.067605 | orchestrator | Thursday 02 April 2026 00:52:33 +0000 (0:00:02.684) 0:00:11.239 ********
2026-04-02 00:54:57.067609 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:57.067616 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:57.067622 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:57.067628 | orchestrator |
2026-04-02 00:54:57.067635 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-02 00:54:57.067641 | orchestrator | Thursday 02 April 2026 00:52:35 +0000 (0:00:02.700) 0:00:13.940 ********
2026-04-02 00:54:57.067648 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:57.067654 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:57.067661 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:57.067667 | orchestrator |
2026-04-02 00:54:57.067673 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-04-02 00:54:57.067680 | orchestrator | Thursday 02 April 2026 00:52:37 +0000 (0:00:01.601) 0:00:15.541 ********
2026-04-02 00:54:57.067686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:54:57.067701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:54:57.067711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-02 00:54:57.067718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 00:54:57.067726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 00:54:57.067741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-02 00:54:57.067749 | orchestrator | 2026-04-02 00:54:57.067756 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-02 00:54:57.067762 | orchestrator | Thursday 02 April 2026 00:52:39 +0000 (0:00:02.109) 0:00:17.650 ******** 2026-04-02 00:54:57.067769 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:54:57.067778 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:54:57.067784 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:54:57.067790 | orchestrator | 2026-04-02 00:54:57.067797 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 
2026-04-02 00:54:57.067803 | orchestrator | Thursday 02 April 2026 00:52:39 +0000 (0:00:00.454) 0:00:18.105 ********
2026-04-02 00:54:57.067809 | orchestrator |
2026-04-02 00:54:57.067816 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-02 00:54:57.067822 | orchestrator | Thursday 02 April 2026 00:52:40 +0000 (0:00:00.066) 0:00:18.171 ********
2026-04-02 00:54:57.067828 | orchestrator |
2026-04-02 00:54:57.067835 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-02 00:54:57.067841 | orchestrator | Thursday 02 April 2026 00:52:40 +0000 (0:00:00.060) 0:00:18.232 ********
2026-04-02 00:54:57.067848 | orchestrator |
2026-04-02 00:54:57.067854 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-04-02 00:54:57.067860 | orchestrator | Thursday 02 April 2026 00:52:40 +0000 (0:00:00.071) 0:00:18.303 ********
2026-04-02 00:54:57.067867 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:57.067873 | orchestrator |
2026-04-02 00:54:57.067879 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-04-02 00:54:57.067886 | orchestrator | Thursday 02 April 2026 00:52:40 +0000 (0:00:00.247) 0:00:18.550 ********
2026-04-02 00:54:57.067893 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:54:57.067899 | orchestrator |
2026-04-02 00:54:57.067905 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-02 00:54:57.067912 | orchestrator | Thursday 02 April 2026 00:52:40 +0000 (0:00:00.211) 0:00:18.762 ********
2026-04-02 00:54:57.067919 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:57.067925 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:57.067932 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:57.067938 | orchestrator |
2026-04-02 00:54:57.067945 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-04-02 00:54:57.067951 | orchestrator | Thursday 02 April 2026 00:53:32 +0000 (0:00:51.859) 0:01:10.621 ********
2026-04-02 00:54:57.067958 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:57.067964 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:54:57.067971 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:54:57.067977 | orchestrator |
2026-04-02 00:54:57.067984 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-02 00:54:57.067990 | orchestrator | Thursday 02 April 2026 00:54:43 +0000 (0:01:10.739) 0:02:21.361 ********
2026-04-02 00:54:57.068001 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:54:57.068008 | orchestrator |
2026-04-02 00:54:57.068015 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-02 00:54:57.068021 | orchestrator | Thursday 02 April 2026 00:54:43 +0000 (0:00:00.590) 0:02:21.952 ********
2026-04-02 00:54:57.068028 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:57.068035 | orchestrator |
2026-04-02 00:54:57.068041 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-04-02 00:54:57.068048 | orchestrator | Thursday 02 April 2026 00:54:46 +0000 (0:00:02.642) 0:02:24.594 ********
2026-04-02 00:54:57.068054 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:57.068061 | orchestrator |
2026-04-02 00:54:57.068067 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-04-02 00:54:57.068074 | orchestrator | Thursday 02 April 2026 00:54:48 +0000 (0:00:02.023) 0:02:26.617 ********
2026-04-02 00:54:57.068080 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:54:57.068086 | orchestrator |
2026-04-02 00:54:57.068093 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-02 00:54:57.068100 | orchestrator | Thursday 02 April 2026 00:54:50 +0000 (0:00:02.283) 0:02:28.900 ********
2026-04-02 00:54:57.068106 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:57.068129 | orchestrator |
2026-04-02 00:54:57.068136 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-02 00:54:57.068143 | orchestrator | Thursday 02 April 2026 00:54:53 +0000 (0:00:02.439) 0:02:31.340 ********
2026-04-02 00:54:57.068149 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:54:57.068156 | orchestrator |
2026-04-02 00:54:57.068162 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:54:57.068169 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-02 00:54:57.068177 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 00:54:57.068187 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 00:54:57.068194 | orchestrator |
2026-04-02 00:54:57.068201 | orchestrator |
2026-04-02 00:54:57.068207 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:54:57.068214 | orchestrator | Thursday 02 April 2026 00:54:55 +0000 (0:00:02.289) 0:02:33.629 ********
2026-04-02 00:54:57.068220 | orchestrator | ===============================================================================
2026-04-02 00:54:57.068227 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.74s
2026-04-02 00:54:57.068233 | orchestrator | opensearch : Restart opensearch container ------------------------------ 51.86s
2026-04-02 00:54:57.068240 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.74s
2026-04-02 00:54:57.068246 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.70s 2026-04-02 00:54:57.068252 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.68s 2026-04-02 00:54:57.068259 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.64s 2026-04-02 00:54:57.068266 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.44s 2026-04-02 00:54:57.068272 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.29s 2026-04-02 00:54:57.068279 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.28s 2026-04-02 00:54:57.068285 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s 2026-04-02 00:54:57.068292 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.02s 2026-04-02 00:54:57.068303 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.60s 2026-04-02 00:54:57.068309 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.35s 2026-04-02 00:54:57.068316 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.04s 2026-04-02 00:54:57.068348 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.78s 2026-04-02 00:54:57.068355 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2026-04-02 00:54:57.068362 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2026-04-02 00:54:57.068368 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.59s 2026-04-02 00:54:57.068374 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 
2026-04-02 00:54:57.068381 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2026-04-02 00:54:57.068386 | orchestrator | 2026-04-02 00:54:57 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:54:57.068571 | orchestrator | 2026-04-02 00:54:57 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:54:57.068585 | orchestrator | 2026-04-02 00:54:57 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:00.103538 | orchestrator | 2026-04-02 00:55:00 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:55:00.104514 | orchestrator | 2026-04-02 00:55:00 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:55:00.104550 | orchestrator | 2026-04-02 00:55:00 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:03.154580 | orchestrator | 2026-04-02 00:55:03 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:55:03.155768 | orchestrator | 2026-04-02 00:55:03 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:55:03.155872 | orchestrator | 2026-04-02 00:55:03 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:06.203251 | orchestrator | 2026-04-02 00:55:06 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:55:06.204640 | orchestrator | 2026-04-02 00:55:06 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:55:06.204669 | orchestrator | 2026-04-02 00:55:06 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:09.248522 | orchestrator | 2026-04-02 00:55:09 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:55:09.250551 | orchestrator | 2026-04-02 00:55:09 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:55:09.250615 | orchestrator | 2026-04-02 00:55:09 | 
INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:12.302319 | orchestrator | 2026-04-02 00:55:12 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:55:12.303851 | orchestrator | 2026-04-02 00:55:12 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:55:12.303928 | orchestrator | 2026-04-02 00:55:12 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:15.346308 | orchestrator | 2026-04-02 00:55:15 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:55:15.347095 | orchestrator | 2026-04-02 00:55:15 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:55:15.347184 | orchestrator | 2026-04-02 00:55:15 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:18.399300 | orchestrator | 2026-04-02 00:55:18 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state STARTED 2026-04-02 00:55:18.401055 | orchestrator | 2026-04-02 00:55:18 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED 2026-04-02 00:55:18.401209 | orchestrator | 2026-04-02 00:55:18 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:55:21.434984 | orchestrator | 2026-04-02 00:55:21 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED 2026-04-02 00:55:21.436077 | orchestrator | 2026-04-02 00:55:21 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:55:21.439782 | orchestrator | 2026-04-02 00:55:21 | INFO  | Task 908cb2cd-27b5-489c-b37b-649eef45bfd2 is in state SUCCESS 2026-04-02 00:55:21.441422 | orchestrator | 2026-04-02 00:55:21.441488 | orchestrator | 2026-04-02 00:55:21.441503 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-02 00:55:21.441518 | orchestrator | 2026-04-02 00:55:21.441532 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-02 
00:55:21.441545 | orchestrator | Thursday 02 April 2026 00:52:21 +0000 (0:00:00.095) 0:00:00.095 ******** 2026-04-02 00:55:21.441560 | orchestrator | ok: [localhost] => { 2026-04-02 00:55:21.441574 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-02 00:55:21.441586 | orchestrator | } 2026-04-02 00:55:21.441599 | orchestrator | 2026-04-02 00:55:21.441613 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-02 00:55:21.441626 | orchestrator | Thursday 02 April 2026 00:52:21 +0000 (0:00:00.043) 0:00:00.139 ******** 2026-04-02 00:55:21.441640 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-02 00:55:21.441655 | orchestrator | ...ignoring 2026-04-02 00:55:21.441668 | orchestrator | 2026-04-02 00:55:21.441679 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-02 00:55:21.441691 | orchestrator | Thursday 02 April 2026 00:52:24 +0000 (0:00:02.908) 0:00:03.048 ******** 2026-04-02 00:55:21.441702 | orchestrator | skipping: [localhost] 2026-04-02 00:55:21.441714 | orchestrator | 2026-04-02 00:55:21.441725 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-02 00:55:21.441737 | orchestrator | Thursday 02 April 2026 00:52:24 +0000 (0:00:00.072) 0:00:03.121 ******** 2026-04-02 00:55:21.441774 | orchestrator | ok: [localhost] 2026-04-02 00:55:21.441786 | orchestrator | 2026-04-02 00:55:21.441799 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:55:21.441813 | orchestrator | 2026-04-02 00:55:21.441827 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:55:21.441842 | orchestrator | Thursday 02 April 
2026 00:52:25 +0000 (0:00:00.228) 0:00:03.350 ******** 2026-04-02 00:55:21.441855 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.441868 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:55:21.441880 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:55:21.441892 | orchestrator | 2026-04-02 00:55:21.441906 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:55:21.441919 | orchestrator | Thursday 02 April 2026 00:52:25 +0000 (0:00:00.300) 0:00:03.650 ******** 2026-04-02 00:55:21.441932 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-02 00:55:21.441947 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-02 00:55:21.441961 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-02 00:55:21.441976 | orchestrator | 2026-04-02 00:55:21.441992 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-02 00:55:21.442006 | orchestrator | 2026-04-02 00:55:21.442084 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-02 00:55:21.442224 | orchestrator | Thursday 02 April 2026 00:52:25 +0000 (0:00:00.386) 0:00:04.037 ******** 2026-04-02 00:55:21.442247 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-02 00:55:21.442261 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-02 00:55:21.442304 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-02 00:55:21.442318 | orchestrator | 2026-04-02 00:55:21.442331 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-02 00:55:21.442344 | orchestrator | Thursday 02 April 2026 00:52:26 +0000 (0:00:00.345) 0:00:04.383 ******** 2026-04-02 00:55:21.442357 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 
00:55:21.442373 | orchestrator | 2026-04-02 00:55:21.442386 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-02 00:55:21.442399 | orchestrator | Thursday 02 April 2026 00:52:26 +0000 (0:00:00.707) 0:00:05.090 ******** 2026-04-02 00:55:21.442463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.442484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 
00:55:21.442515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.442528 | orchestrator | 2026-04-02 00:55:21.442554 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] 
************** 2026-04-02 00:55:21.442568 | orchestrator | Thursday 02 April 2026 00:52:29 +0000 (0:00:03.015) 0:00:08.105 ******** 2026-04-02 00:55:21.442580 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.442593 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.442607 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.442620 | orchestrator | 2026-04-02 00:55:21.442633 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-02 00:55:21.442646 | orchestrator | Thursday 02 April 2026 00:52:30 +0000 (0:00:00.618) 0:00:08.724 ******** 2026-04-02 00:55:21.442658 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.442671 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.442684 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.442697 | orchestrator | 2026-04-02 00:55:21.442710 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-02 00:55:21.442725 | orchestrator | Thursday 02 April 2026 00:52:32 +0000 (0:00:01.587) 0:00:10.311 ******** 2026-04-02 00:55:21.442739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.442777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.442793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.442816 | orchestrator | 2026-04-02 00:55:21.442828 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-02 00:55:21.442841 | orchestrator | Thursday 02 April 2026 00:52:36 +0000 (0:00:04.079) 0:00:14.391 ******** 2026-04-02 00:55:21.442854 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.442867 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.442879 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.442892 | orchestrator | 2026-04-02 00:55:21.442907 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-02 00:55:21.442919 | orchestrator | Thursday 02 April 2026 00:52:37 +0000 (0:00:01.254) 0:00:15.645 ******** 2026-04-02 00:55:21.442931 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:55:21.442945 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.442958 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:55:21.442971 | orchestrator | 2026-04-02 00:55:21.442984 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-02 00:55:21.442996 | orchestrator | Thursday 02 April 2026 00:52:41 +0000 (0:00:03.769) 0:00:19.415 ******** 2026-04-02 00:55:21.443009 | 
orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:55:21.443023 | orchestrator | 2026-04-02 00:55:21.443035 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-02 00:55:21.443047 | orchestrator | Thursday 02 April 2026 00:52:41 +0000 (0:00:00.633) 0:00:20.048 ******** 2026-04-02 00:55:21.443074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443126 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.443141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443153 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.443178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443191 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.443202 | orchestrator | 2026-04-02 00:55:21.443214 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-02 00:55:21.443227 | orchestrator | Thursday 02 April 2026 00:52:44 +0000 (0:00:02.603) 0:00:22.652 ******** 2026-04-02 00:55:21.443248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443262 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.443288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443303 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.443319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443340 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.443353 | orchestrator | 2026-04-02 00:55:21.443365 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-02 00:55:21.443380 | orchestrator | Thursday 02 April 2026 00:52:47 +0000 (0:00:02.924) 0:00:25.576 ******** 2026-04-02 00:55:21.443394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443406 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.443440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443464 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.443477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-02 00:55:21.443491 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.443503 | orchestrator | 2026-04-02 00:55:21.443516 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-02 00:55:21.443530 | orchestrator | Thursday 02 April 2026 00:52:50 +0000 (0:00:03.405) 0:00:28.982 ******** 2026-04-02 00:55:21.443558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.443583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.443612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-02 00:55:21.443636 | orchestrator | 2026-04-02 00:55:21.443649 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-02 00:55:21.443661 | orchestrator | Thursday 02 April 2026 00:52:54 +0000 (0:00:03.781) 0:00:32.763 ******** 2026-04-02 00:55:21.443674 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.443687 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:55:21.443701 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:55:21.443714 | orchestrator | 2026-04-02 00:55:21.443726 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-02 00:55:21.443740 | orchestrator | Thursday 02 April 2026 00:52:55 +0000 (0:00:00.787) 0:00:33.551 ******** 2026-04-02 00:55:21.443752 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.443765 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:55:21.443777 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:55:21.443789 | orchestrator | 2026-04-02 00:55:21.443800 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-02 00:55:21.443814 | orchestrator | Thursday 02 April 2026 00:52:55 +0000 (0:00:00.310) 0:00:33.861 ******** 2026-04-02 00:55:21.443826 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.443838 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:55:21.443849 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:55:21.443861 | orchestrator | 2026-04-02 00:55:21.443874 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-02 00:55:21.443885 | orchestrator | Thursday 02 April 2026 00:52:56 +0000 (0:00:00.362) 0:00:34.223 ******** 2026-04-02 00:55:21.443898 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-02 00:55:21.443912 | orchestrator | ...ignoring 2026-04-02 00:55:21.443924 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-02 00:55:21.443937 | orchestrator | ...ignoring 2026-04-02 00:55:21.443948 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-02 00:55:21.443961 | orchestrator | ...ignoring 2026-04-02 00:55:21.443972 | orchestrator | 2026-04-02 00:55:21.443986 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-02 00:55:21.443998 | orchestrator | Thursday 02 April 2026 00:53:07 +0000 (0:00:11.200) 0:00:45.424 ******** 2026-04-02 00:55:21.444283 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.444301 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:55:21.444313 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:55:21.444325 | orchestrator | 2026-04-02 00:55:21.444336 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-02 00:55:21.444348 | orchestrator | Thursday 02 April 2026 00:53:07 +0000 (0:00:00.426) 0:00:45.850 ******** 2026-04-02 00:55:21.444374 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.444388 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.444399 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.444411 | orchestrator | 2026-04-02 00:55:21.444420 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-02 00:55:21.444431 | orchestrator | Thursday 02 April 2026 00:53:08 +0000 (0:00:00.392) 0:00:46.242 ******** 2026-04-02 00:55:21.444441 | orchestrator | skipping: 
[testbed-node-0] 2026-04-02 00:55:21.444451 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.444461 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.444471 | orchestrator | 2026-04-02 00:55:21.444481 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-02 00:55:21.444491 | orchestrator | Thursday 02 April 2026 00:53:08 +0000 (0:00:00.408) 0:00:46.651 ******** 2026-04-02 00:55:21.444503 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.444515 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.444527 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.444540 | orchestrator | 2026-04-02 00:55:21.444551 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-02 00:55:21.444563 | orchestrator | Thursday 02 April 2026 00:53:09 +0000 (0:00:00.624) 0:00:47.276 ******** 2026-04-02 00:55:21.444575 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.444587 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:55:21.444601 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:55:21.444612 | orchestrator | 2026-04-02 00:55:21.444624 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-02 00:55:21.444645 | orchestrator | Thursday 02 April 2026 00:53:09 +0000 (0:00:00.519) 0:00:47.795 ******** 2026-04-02 00:55:21.444671 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.444683 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.444695 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.444706 | orchestrator | 2026-04-02 00:55:21.444717 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-02 00:55:21.444729 | orchestrator | Thursday 02 April 2026 00:53:10 +0000 (0:00:00.432) 0:00:48.228 ******** 2026-04-02 00:55:21.444741 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 00:55:21.444753 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.444765 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-02 00:55:21.444777 | orchestrator | 2026-04-02 00:55:21.444791 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-02 00:55:21.444803 | orchestrator | Thursday 02 April 2026 00:53:10 +0000 (0:00:00.368) 0:00:48.596 ******** 2026-04-02 00:55:21.444815 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.444827 | orchestrator | 2026-04-02 00:55:21.444840 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-02 00:55:21.444854 | orchestrator | Thursday 02 April 2026 00:53:20 +0000 (0:00:10.212) 0:00:58.809 ******** 2026-04-02 00:55:21.444866 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.444878 | orchestrator | 2026-04-02 00:55:21.444887 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-02 00:55:21.444897 | orchestrator | Thursday 02 April 2026 00:53:20 +0000 (0:00:00.282) 0:00:59.092 ******** 2026-04-02 00:55:21.444907 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.444918 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.444928 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.444939 | orchestrator | 2026-04-02 00:55:21.444950 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-02 00:55:21.444963 | orchestrator | Thursday 02 April 2026 00:53:21 +0000 (0:00:00.778) 0:00:59.871 ******** 2026-04-02 00:55:21.444976 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.444988 | orchestrator | 2026-04-02 00:55:21.445000 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-02 00:55:21.445025 | orchestrator | Thursday 02 
April 2026 00:53:29 +0000 (0:00:07.460) 0:01:07.331 ******** 2026-04-02 00:55:21.445038 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.445052 | orchestrator | 2026-04-02 00:55:21.445064 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-02 00:55:21.445076 | orchestrator | Thursday 02 April 2026 00:53:31 +0000 (0:00:01.861) 0:01:09.193 ******** 2026-04-02 00:55:21.445088 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.445123 | orchestrator | 2026-04-02 00:55:21.445136 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-02 00:55:21.445146 | orchestrator | Thursday 02 April 2026 00:53:33 +0000 (0:00:02.602) 0:01:11.796 ******** 2026-04-02 00:55:21.445156 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.445171 | orchestrator | 2026-04-02 00:55:21.445181 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-02 00:55:21.445193 | orchestrator | Thursday 02 April 2026 00:53:33 +0000 (0:00:00.124) 0:01:11.920 ******** 2026-04-02 00:55:21.445206 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.445217 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.445231 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.445243 | orchestrator | 2026-04-02 00:55:21.445255 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-02 00:55:21.445266 | orchestrator | Thursday 02 April 2026 00:53:34 +0000 (0:00:00.477) 0:01:12.398 ******** 2026-04-02 00:55:21.445275 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.445285 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:55:21.445294 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:55:21.445304 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-02 00:55:21.445314 | orchestrator | 
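[Editor's note] The ignored "Timeout when waiting for search string MariaDB" failures above are expected on a first deploy: the port-liveness check probes 3306 for the version banner a MariaDB server sends in its initial handshake, and nothing is listening until the bootstrap container starts. A minimal sketch of how that banner can be extracted from a protocol-10 handshake packet; the helper name and the fabricated sample bytes are ours, not from the playbook:

```python
def server_version(handshake: bytes) -> str:
    """Pull the NUL-terminated server version string out of a
    MySQL/MariaDB protocol-10 initial handshake packet."""
    length = int.from_bytes(handshake[:3], "little")  # 3-byte payload length
    payload = handshake[4:4 + length]                 # skip header (length + sequence id)
    if not payload or payload[0] != 0x0A:
        raise ValueError("not a protocol-10 handshake")
    end = payload.index(b"\x00", 1)                   # version string ends at NUL
    return payload[1:end].decode("ascii")

# Fabricated handshake carrying a MariaDB banner, for illustration only:
banner = b"5.5.5-10.11.6-MariaDB-log"
payload = b"\x0a" + banner + b"\x00" + b"\x01\x00\x00\x00"
packet = len(payload).to_bytes(3, "little") + b"\x00" + payload
assert "MariaDB" in server_version(packet)
```

A probe that connects, reads this packet, and searches the decoded version for "MariaDB" succeeds once the server is up, which is exactly why all three checks pass on retry later in the run.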
2026-04-02 00:55:21.445324 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-02 00:55:21.445337 | orchestrator | skipping: no hosts matched
2026-04-02 00:55:21.445349 | orchestrator |
2026-04-02 00:55:21.445361 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-02 00:55:21.445374 | orchestrator |
2026-04-02 00:55:21.445387 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-02 00:55:21.445398 | orchestrator | Thursday 02 April 2026 00:53:34 +0000 (0:00:00.343) 0:01:12.741 ********
2026-04-02 00:55:21.445412 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:55:21.445424 | orchestrator |
2026-04-02 00:55:21.445436 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-02 00:55:21.445447 | orchestrator | Thursday 02 April 2026 00:53:51 +0000 (0:00:17.098) 0:01:29.839 ********
2026-04-02 00:55:21.445459 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:55:21.445471 | orchestrator |
2026-04-02 00:55:21.445482 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-02 00:55:21.445494 | orchestrator | Thursday 02 April 2026 00:54:07 +0000 (0:00:15.653) 0:01:45.493 ********
2026-04-02 00:55:21.445506 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:55:21.445517 | orchestrator |
2026-04-02 00:55:21.445530 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-02 00:55:21.445542 | orchestrator |
2026-04-02 00:55:21.445551 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-02 00:55:21.445560 | orchestrator | Thursday 02 April 2026 00:54:09 +0000 (0:00:02.520) 0:01:48.014 ********
2026-04-02 00:55:21.445569 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:55:21.445577 | orchestrator |
2026-04-02 00:55:21.445587 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-02 00:55:21.445597 | orchestrator | Thursday 02 April 2026 00:54:27 +0000 (0:00:17.711) 0:02:05.726 ********
2026-04-02 00:55:21.445607 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:55:21.445618 | orchestrator |
2026-04-02 00:55:21.445628 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-02 00:55:21.445638 | orchestrator | Thursday 02 April 2026 00:54:43 +0000 (0:00:15.616) 0:02:21.342 ********
2026-04-02 00:55:21.445662 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:55:21.445674 | orchestrator |
2026-04-02 00:55:21.445694 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-02 00:55:21.445704 | orchestrator |
2026-04-02 00:55:21.445722 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-02 00:55:21.445733 | orchestrator | Thursday 02 April 2026 00:54:45 +0000 (0:00:02.259) 0:02:23.602 ********
2026-04-02 00:55:21.445743 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:55:21.445753 | orchestrator |
2026-04-02 00:55:21.445764 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-02 00:55:21.445773 | orchestrator | Thursday 02 April 2026 00:55:01 +0000 (0:00:16.135) 0:02:39.738 ********
2026-04-02 00:55:21.445782 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:55:21.445792 | orchestrator |
2026-04-02 00:55:21.445803 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-02 00:55:21.445814 | orchestrator | Thursday 02 April 2026 00:55:02 +0000 (0:00:00.566) 0:02:40.304 ********
2026-04-02 00:55:21.445826 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:55:21.445837 | orchestrator |
2026-04-02 00:55:21.445847 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-02 00:55:21.445857 | orchestrator |
2026-04-02 00:55:21.445868 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-02 00:55:21.445879 | orchestrator | Thursday 02 April 2026 00:55:04 +0000 (0:00:02.518) 0:02:42.822 ********
2026-04-02 00:55:21.445889 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:55:21.445901 | orchestrator |
2026-04-02 00:55:21.445911 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-02 00:55:21.445922 | orchestrator | Thursday 02 April 2026 00:55:05 +0000 (0:00:00.621) 0:02:43.444 ********
2026-04-02 00:55:21.445934 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:55:21.445944 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:55:21.445955 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:55:21.445966 | orchestrator |
2026-04-02 00:55:21.445977 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-02 00:55:21.445987 | orchestrator | Thursday 02 April 2026 00:55:07 +0000 (0:00:02.552) 0:02:45.996 ********
2026-04-02 00:55:21.445998 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:55:21.446008 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:55:21.446057 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:55:21.446070 | orchestrator |
2026-04-02 00:55:21.446081 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-02 00:55:21.446095 | orchestrator | Thursday 02 April 2026 00:55:10 +0000 (0:00:02.434) 0:02:48.430 ********
2026-04-02 00:55:21.446127 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:55:21.446139 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:55:21.446149 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:55:21.446160 | orchestrator |
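[Editor's note] The recurring "Wait for MariaDB service to sync WSREP" tasks above block until Galera reports the node as synced. The playbook does this by querying the `wsrep_local_state_comment` status variable inside the container and retrying until it reads Synced; the helper below only sketches the parsing/decision step of that check (the function name and sample strings are ours):

```python
def wsrep_synced(status_output: str) -> bool:
    """True when the output of
    SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'
    reports the Galera node as Synced (as opposed to e.g.
    Joining, Joined, or Donor/Desynced)."""
    for line in status_output.splitlines():
        fields = line.split("\t")
        if fields[0].strip() == "wsrep_local_state_comment":
            return fields[-1].strip() == "Synced"
    return False  # variable absent: not a Galera node, treat as not synced
```

A wait loop would call this on fresh query output every few seconds and give up after a timeout, which matches the 2–16 s durations these tasks show in the recap.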
2026-04-02 00:55:21.446172 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-02 00:55:21.446182 | orchestrator | Thursday 02 April 2026 00:55:12 +0000 (0:00:02.404) 0:02:50.834 ******** 2026-04-02 00:55:21.446194 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.446204 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.446216 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:55:21.446226 | orchestrator | 2026-04-02 00:55:21.446237 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-02 00:55:21.446247 | orchestrator | Thursday 02 April 2026 00:55:15 +0000 (0:00:02.605) 0:02:53.440 ******** 2026-04-02 00:55:21.446257 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:55:21.446267 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:55:21.446279 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:55:21.446289 | orchestrator | 2026-04-02 00:55:21.446299 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-02 00:55:21.446310 | orchestrator | Thursday 02 April 2026 00:55:18 +0000 (0:00:02.731) 0:02:56.172 ******** 2026-04-02 00:55:21.446333 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:55:21.446344 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:55:21.446355 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:55:21.446365 | orchestrator | 2026-04-02 00:55:21.446375 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:55:21.446389 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-02 00:55:21.446401 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-02 00:55:21.446414 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
2026-04-02 00:55:21.446425 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-02 00:55:21.446437 | orchestrator |
2026-04-02 00:55:21.446448 | orchestrator |
2026-04-02 00:55:21.446459 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:55:21.446469 | orchestrator | Thursday 02 April 2026 00:55:18 +0000 (0:00:00.212) 0:02:56.384 ********
2026-04-02 00:55:21.446479 | orchestrator | ===============================================================================
2026-04-02 00:55:21.446489 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.81s
2026-04-02 00:55:21.446501 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.27s
2026-04-02 00:55:21.446512 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.14s
2026-04-02 00:55:21.446522 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.20s
2026-04-02 00:55:21.446532 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.21s
2026-04-02 00:55:21.446544 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.46s
2026-04-02 00:55:21.446572 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.78s
2026-04-02 00:55:21.446584 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.08s
2026-04-02 00:55:21.446596 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.78s
2026-04-02 00:55:21.446606 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.77s
2026-04-02 00:55:21.446617 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.41s
2026-04-02 00:55:21.446629 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.02s
2026-04-02 00:55:21.446639 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.92s
2026-04-02 00:55:21.446650 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s
2026-04-02 00:55:21.446660 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.73s
2026-04-02 00:55:21.446670 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.61s
2026-04-02 00:55:21.446680 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.60s
2026-04-02 00:55:21.446691 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.60s
2026-04-02 00:55:21.446701 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.55s
2026-04-02 00:55:21.446712 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.52s
2026-04-02 00:55:21.446724 | orchestrator | 2026-04-02 00:55:21 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:21.446734 | orchestrator | 2026-04-02 00:55:21 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:24.490493 | orchestrator | 2026-04-02 00:55:24 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:24.491673 | orchestrator | 2026-04-02 00:55:24 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:24.493854 | orchestrator | 2026-04-02 00:55:24 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:24.493905 | orchestrator | 2026-04-02 00:55:24 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:27.531412 | orchestrator | 2026-04-02 00:55:27 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:27.532195 | orchestrator | 2026-04-02 00:55:27 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:27.533293 | orchestrator | 2026-04-02 00:55:27 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:27.533338 | orchestrator | 2026-04-02 00:55:27 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:30.564178 | orchestrator | 2026-04-02 00:55:30 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:30.567072 | orchestrator | 2026-04-02 00:55:30 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:30.570658 | orchestrator | 2026-04-02 00:55:30 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:30.570799 | orchestrator | 2026-04-02 00:55:30 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:33.616947 | orchestrator | 2026-04-02 00:55:33 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:33.619306 | orchestrator | 2026-04-02 00:55:33 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:33.622618 | orchestrator | 2026-04-02 00:55:33 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:33.622706 | orchestrator | 2026-04-02 00:55:33 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:36.656060 | orchestrator | 2026-04-02 00:55:36 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:36.656813 | orchestrator | 2026-04-02 00:55:36 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:36.657888 | orchestrator | 2026-04-02 00:55:36 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:36.657918 | orchestrator | 2026-04-02 00:55:36 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:39.702657 | orchestrator | 2026-04-02 00:55:39 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:39.704307 | orchestrator | 2026-04-02 00:55:39 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:39.705453 | orchestrator | 2026-04-02 00:55:39 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:39.705486 | orchestrator | 2026-04-02 00:55:39 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:42.744419 | orchestrator | 2026-04-02 00:55:42 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:42.744932 | orchestrator | 2026-04-02 00:55:42 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:42.746678 | orchestrator | 2026-04-02 00:55:42 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:42.746817 | orchestrator | 2026-04-02 00:55:42 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:45.785557 | orchestrator | 2026-04-02 00:55:45 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:45.787355 | orchestrator | 2026-04-02 00:55:45 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:45.793586 | orchestrator | 2026-04-02 00:55:45 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:45.793678 | orchestrator | 2026-04-02 00:55:45 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:48.835935 | orchestrator | 2026-04-02 00:55:48 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:48.838261 | orchestrator | 2026-04-02 00:55:48 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:48.840336 | orchestrator | 2026-04-02 00:55:48 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:48.840411 | orchestrator | 2026-04-02 00:55:48 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:51.886879 | orchestrator | 2026-04-02 00:55:51 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:51.886937 | orchestrator | 2026-04-02 00:55:51 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:51.888810 | orchestrator | 2026-04-02 00:55:51 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state STARTED
2026-04-02 00:55:51.888861 | orchestrator | 2026-04-02 00:55:51 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:54.926529 | orchestrator | 2026-04-02 00:55:54 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:54.929075 | orchestrator | 2026-04-02 00:55:54 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:54.933748 | orchestrator | 2026-04-02 00:55:54 | INFO  | Task 81f56938-c114-43e6-bcf1-ee86e0091ca3 is in state SUCCESS
2026-04-02 00:55:54.934275 | orchestrator | 2026-04-02 00:55:54 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:55:54.935547 | orchestrator |
2026-04-02 00:55:54.935602 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-02 00:55:54.935612 | orchestrator | 2.16.14
2026-04-02 00:55:54.935620 | orchestrator |
2026-04-02 00:55:54.935627 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-02 00:55:54.935633 | orchestrator |
2026-04-02 00:55:54.935637 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-02 00:55:54.935643 | orchestrator | Thursday 02 April 2026 00:54:04 +0000 (0:00:00.534) 0:00:00.534 ********
2026-04-02 00:55:54.935647 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 00:55:54.935653 | orchestrator |
2026-04-02 00:55:54.935660 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-02 00:55:54.935665 | orchestrator | Thursday 02 April 2026 00:54:04 +0000 (0:00:00.598) 0:00:01.132 ********
2026-04-02 00:55:54.935671 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.935678 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.935684 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.935690 | orchestrator |
2026-04-02 00:55:54.935696 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-02 00:55:54.935703 | orchestrator | Thursday 02 April 2026 00:54:05 +0000 (0:00:01.007) 0:00:02.140 ********
2026-04-02 00:55:54.935709 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.935716 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.935722 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.935728 | orchestrator |
2026-04-02 00:55:54.935735 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-02 00:55:54.935741 | orchestrator | Thursday 02 April 2026 00:54:06 +0000 (0:00:00.287) 0:00:02.428 ********
2026-04-02 00:55:54.935748 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.935754 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.935785 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.935792 | orchestrator |
2026-04-02 00:55:54.935799 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-02 00:55:54.935806 | orchestrator | Thursday 02 April 2026 00:54:06 +0000 (0:00:00.793) 0:00:03.221 ********
2026-04-02 00:55:54.935812 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.935875 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.936168 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.936186 | orchestrator |
2026-04-02 00:55:54.936193 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-02 00:55:54.936199 | orchestrator | Thursday 02 April 2026 00:54:07 +0000 (0:00:00.296) 0:00:03.518 ********
2026-04-02 00:55:54.936205 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.936211 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.936217 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.936224 | orchestrator |
2026-04-02 00:55:54.936230 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-02 00:55:54.936251 | orchestrator | Thursday 02 April 2026 00:54:07 +0000 (0:00:00.291) 0:00:03.809 ********
2026-04-02 00:55:54.936256 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.936262 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.936267 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.936273 | orchestrator |
2026-04-02 00:55:54.936279 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-02 00:55:54.936285 | orchestrator | Thursday 02 April 2026 00:54:07 +0000 (0:00:00.316) 0:00:04.126 ********
2026-04-02 00:55:54.936290 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.936298 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.936303 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.936309 | orchestrator |
2026-04-02 00:55:54.936315 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-02 00:55:54.936322 | orchestrator | Thursday 02 April 2026 00:54:08 +0000 (0:00:00.478) 0:00:04.605 ********
2026-04-02 00:55:54.936328 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.936334 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.936339 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.936345 | orchestrator |
2026-04-02 00:55:54.936351 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-02 00:55:54.936357 | orchestrator | Thursday 02 April 2026 00:54:08 +0000 (0:00:00.272) 0:00:04.878 ********
2026-04-02 00:55:54.936364 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-02 00:55:54.936370 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-02 00:55:54.936376 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-02 00:55:54.936382 | orchestrator |
2026-04-02 00:55:54.936388 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-02 00:55:54.936393 | orchestrator | Thursday 02 April 2026 00:54:09 +0000 (0:00:00.645) 0:00:05.523 ********
2026-04-02 00:55:54.936399 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.936405 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.936411 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.936417 | orchestrator |
2026-04-02 00:55:54.936423 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-02 00:55:54.936429 | orchestrator | Thursday 02 April 2026 00:54:09 +0000 (0:00:00.436) 0:00:05.960 ********
2026-04-02 00:55:54.936435 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-02 00:55:54.936441 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-02 00:55:54.936661 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-02 00:55:54.936677 | orchestrator |
2026-04-02 00:55:54.936684 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-02 00:55:54.936690 | orchestrator | Thursday 02 April 2026 00:54:12 +0000 (0:00:03.011) 0:00:08.972 ********
2026-04-02 00:55:54.936711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-02 00:55:54.936718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-02 00:55:54.936724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-02 00:55:54.936730 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.936736 | orchestrator |
2026-04-02 00:55:54.936769 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-02 00:55:54.936776 | orchestrator | Thursday 02 April 2026 00:54:13 +0000 (0:00:00.395) 0:00:09.367 ********
2026-04-02 00:55:54.936784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936804 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.936809 | orchestrator |
2026-04-02 00:55:54.936815 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-02 00:55:54.936822 | orchestrator | Thursday 02 April 2026 00:54:13 +0000 (0:00:00.784) 0:00:10.152 ********
2026-04-02 00:55:54.936830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936846 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936860 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.936867 | orchestrator |
2026-04-02 00:55:54.936873 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-02 00:55:54.936880 | orchestrator | Thursday 02 April 2026 00:54:14 +0000 (0:00:00.158) 0:00:10.310 ********
2026-04-02 00:55:54.936888 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '23e11e95b6bd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-02 00:54:10.695727', 'end': '2026-04-02 00:54:10.727938', 'delta': '0:00:00.032211', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['23e11e95b6bd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936905 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '64f3307d8d6a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-02 00:54:11.712749', 'end': '2026-04-02 00:54:11.746991', 'delta': '0:00:00.034242', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['64f3307d8d6a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936933 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0b9419eb739e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-02 00:54:12.555666', 'end': '2026-04-02 00:54:12.599592', 'delta': '0:00:00.043926', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0b9419eb739e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-02 00:55:54.936941 | orchestrator |
2026-04-02 00:55:54.936947 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-02 00:55:54.936953 | orchestrator | Thursday 02 April 2026 00:54:14 +0000 (0:00:00.357) 0:00:10.667 ********
2026-04-02 00:55:54.936959 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.936964 | orchestrator | ok: [testbed-node-4]
2026-04-02 00:55:54.936971 | orchestrator | ok: [testbed-node-5]
2026-04-02 00:55:54.936978 | orchestrator |
2026-04-02 00:55:54.936984 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-02 00:55:54.936990 | orchestrator | Thursday 02 April 2026 00:54:14 +0000 (0:00:00.428) 0:00:11.096 ********
2026-04-02 00:55:54.936995 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-02 00:55:54.937000 | orchestrator |
2026-04-02 00:55:54.937005 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-02 00:55:54.937011 | orchestrator | Thursday 02 April 2026 00:54:16 +0000 (0:00:01.385) 0:00:12.482 ********
2026-04-02 00:55:54.937017 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937023 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937029 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937035 | orchestrator |
2026-04-02 00:55:54.937040 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-02 00:55:54.937046 | orchestrator | Thursday 02 April 2026 00:54:16 +0000 (0:00:00.293) 0:00:12.776 ********
2026-04-02 00:55:54.937051 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937057 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937062 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937069 | orchestrator |
2026-04-02 00:55:54.937076 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-02 00:55:54.937109 | orchestrator | Thursday 02 April 2026 00:54:16 +0000 (0:00:00.399) 0:00:13.176 ********
2026-04-02 00:55:54.937117 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937123 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937128 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937134 | orchestrator |
2026-04-02 00:55:54.937140 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-02 00:55:54.937145 | orchestrator | Thursday 02 April 2026 00:54:17 +0000 (0:00:00.474) 0:00:13.650 ********
2026-04-02 00:55:54.937161 | orchestrator | ok: [testbed-node-3]
2026-04-02 00:55:54.937167 | orchestrator |
2026-04-02 00:55:54.937172 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-02 00:55:54.937177 | orchestrator | Thursday 02 April 2026 00:54:17 +0000 (0:00:00.119) 0:00:13.770 ********
2026-04-02 00:55:54.937187 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937200 | orchestrator |
2026-04-02 00:55:54.937214 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-02 00:55:54.937223 | orchestrator | Thursday 02 April 2026 00:54:17 +0000 (0:00:00.218) 0:00:13.988 ********
2026-04-02 00:55:54.937229 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937234 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937240 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937246 | orchestrator |
2026-04-02 00:55:54.937252 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-02 00:55:54.937257 | orchestrator | Thursday 02 April 2026 00:54:18 +0000 (0:00:00.270) 0:00:14.259 ********
2026-04-02 00:55:54.937263 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937270 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937276 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937281 | orchestrator |
2026-04-02 00:55:54.937288 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-02 00:55:54.937294 | orchestrator | Thursday 02 April 2026 00:54:18 +0000 (0:00:00.305) 0:00:14.564 ********
2026-04-02 00:55:54.937301 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937307 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937313 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937319 | orchestrator |
2026-04-02 00:55:54.937325 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-02 00:55:54.937330 | orchestrator | Thursday 02 April 2026 00:54:18 +0000 (0:00:00.464) 0:00:15.029 ********
2026-04-02 00:55:54.937336 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937341 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937346 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937352 | orchestrator |
2026-04-02 00:55:54.937358 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-02 00:55:54.937366 | orchestrator | Thursday 02 April 2026 00:54:19 +0000 (0:00:00.277) 0:00:15.307 ********
2026-04-02 00:55:54.937371 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937378 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937386 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937394 | orchestrator |
2026-04-02 00:55:54.937401 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-02 00:55:54.937408 | orchestrator | Thursday 02 April 2026 00:54:19 +0000 (0:00:00.268) 0:00:15.577 ********
2026-04-02 00:55:54.937415 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937422 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937428 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937462 | orchestrator |
2026-04-02 00:55:54.937471 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-02 00:55:54.937478 | orchestrator | Thursday 02 April 2026 00:54:19 +0000 (0:00:00.385) 0:00:15.846 ********
2026-04-02 00:55:54.937484 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937489 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937495 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.937502 | orchestrator |
2026-04-02 00:55:54.937508 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-02 00:55:54.937513 | orchestrator | Thursday 02 April 2026 00:54:20 +0000 (0:00:00.385) 0:00:16.232 ********
2026-04-02 00:55:54.937521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb-osd--block--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb', 'dm-uuid-LVM-1MTXoGF8o53qkDTSPtxC3aThD3vdY9e755qwrpVQd1mUdwCow4Ywk178cgvEkFc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c3a3e1f2--53da--5696--b7a3--d36d02964763-osd--block--c3a3e1f2--53da--5696--b7a3--d36d02964763', 'dm-uuid-LVM-hTQpxbX1AFLcmtQHNUWfcNukVXxPFxHQ3EK7GjgLiZfCRXz108x0HJCxENks5HKf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--88a5a1a0--9236--5c9d--8025--e39ec03fb505-osd--block--88a5a1a0--9236--5c9d--8025--e39ec03fb505', 'dm-uuid-LVM-WMI3nYyBf7h4a35UZ2BgnO9vqcyxNCNvD7goC0THY81DlzhSjoy8A79FmmjpRb1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27c5b00--4597--5124--934a--fd641c3feb65-osd--block--b27c5b00--4597--5124--934a--fd641c3feb65', 'dm-uuid-LVM-h3QAXOvc3sBuPMb0fptvx6xk5sLFRoS4xCd5UbEvm5kMw6J5pD02ABDp4W7c0Nb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-02 00:55:54.937692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f06db598-1059-4957-87c8-4c1fce10345d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  
2026-04-02 00:55:54.937708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:55:54.937720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb-osd--block--3f9aa46c--6044--534e--8fed--f8e8e1b6cabb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VZf4A8-Vl5Y-RfGE-02Wv-400i-5pCQ-Pd3NQz', 'scsi-0QEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb', 'scsi-SQEMU_QEMU_HARDDISK_2d38c850-3a2f-4695-a83c-0cf43f012ceb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:55:54.937729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:55:54.937736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c3a3e1f2--53da--5696--b7a3--d36d02964763-osd--block--c3a3e1f2--53da--5696--b7a3--d36d02964763'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-77F7uO-Apmc-H24C-qSBW-Epdk-PXaJ-z2vjIe', 'scsi-0QEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45', 'scsi-SQEMU_QEMU_HARDDISK_a19da191-4981-42d2-9779-658e739bce45'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:55:54.937743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:55:54.937770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:55:54.937784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161', 'scsi-SQEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:55:54.937793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-02 00:55:54.937800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-02 00:55:54.937810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-04-02 00:55:54.937817 | orchestrator | skipping: [testbed-node-3]
2026-04-02 00:55:54.937830 | orchestrator | skipping: [testbed-node-4] => (item sda: QEMU HARDDISK, 80.00 GB, partitions sda1 cloudimg-rootfs 79.00 GB / sda14 4.00 MB / sda15 UEFI 106.00 MB / sda16 BOOT 913.00 MB)
2026-04-02 00:55:54.937844 | orchestrator | skipping: [testbed-node-4] => (item sdb: QEMU HARDDISK, 20.00 GB, ceph OSD LVM PV, master dm-0)
2026-04-02 00:55:54.937853 | orchestrator | skipping: [testbed-node-4] => (item sdc: QEMU HARDDISK, 20.00 GB, ceph OSD LVM PV, master dm-1)
2026-04-02 00:55:54.937864 | orchestrator | skipping: [testbed-node-4] => (item sdd: QEMU HARDDISK, 20.00 GB, no holders, no partitions)
2026-04-02 00:55:54.937873 | orchestrator | skipping: [testbed-node-4] => (item sr0: QEMU DVD-ROM, 506.00 KB, label config-2)
2026-04-02 00:55:54.937880 | orchestrator | skipping: [testbed-node-4]
2026-04-02 00:55:54.937886 | orchestrator | skipping: [testbed-node-5] => (item dm-0: ceph OSD LVM device, 20.00 GB, rotational)
2026-04-02 00:55:54.937901 | orchestrator | skipping: [testbed-node-5] => (item dm-1: ceph OSD LVM device, 20.00 GB, rotational)
[… repeated skips condensed: testbed-node-5 loop0–loop7, timestamps 00:55:54.937914–00:55:54.937970, all with identical zero-size virtual loop-device facts …]
2026-04-02 00:55:54.937982 | orchestrator | skipping: [testbed-node-5] => (item sda: QEMU HARDDISK, 80.00 GB, partitions sda1 cloudimg-rootfs 79.00 GB / sda14 4.00 MB / sda15 UEFI 106.00 MB / sda16 BOOT 913.00 MB)
2026-04-02 00:55:54.938002 | orchestrator | skipping: [testbed-node-5] => (item sdb: QEMU HARDDISK, 20.00 GB, ceph OSD LVM PV, master dm-0)
2026-04-02 00:55:54.938011 | orchestrator | skipping: [testbed-node-5] => (item sdc: QEMU HARDDISK, 20.00 GB, ceph OSD LVM PV, master dm-1)
2026-04-02 00:55:54.938188 | orchestrator | skipping: [testbed-node-5] => (item sdd: QEMU HARDDISK, 20.00 GB, no holders, no partitions)
2026-04-02 00:55:54.938228 | orchestrator | skipping: [testbed-node-5] => (item sr0: QEMU DVD-ROM, 506.00 KB, label config-2)
2026-04-02 00:55:54.938234 | orchestrator | skipping: [testbed-node-5]
2026-04-02 00:55:54.938242 | orchestrator |
2026-04-02 00:55:54.938256 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-02 00:55:54.938263 | orchestrator | Thursday 02 April 2026 00:54:20 +0000 (0:00:00.656) 0:00:16.888 ********
[… repeated skips condensed: testbed-node-3 items dm-0, dm-1, loop0–loop7, sda, sdb, sdc, timestamps 00:55:54.938271–00:55:54.938418, each skipped with skip_reason 'Conditional result was False' for false_condition 'osd_auto_discovery | default(False) | bool' …]
2026-04-02 00:55:54.938435 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--88a5a1a0--9236--5c9d--8025--e39ec03fb505-osd--block--88a5a1a0--9236--5c9d--8025--e39ec03fb505', 
'dm-uuid-LVM-WMI3nYyBf7h4a35UZ2BgnO9vqcyxNCNvD7goC0THY81DlzhSjoy8A79FmmjpRb1D'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161', 'scsi-SQEMU_QEMU_HARDDISK_e9e694a9-9d82-484d-8c29-c125fbbe1161'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b27c5b00--4597--5124--934a--fd641c3feb65-osd--block--b27c5b00--4597--5124--934a--fd641c3feb65', 'dm-uuid-LVM-h3QAXOvc3sBuPMb0fptvx6xk5sLFRoS4xCd5UbEvm5kMw6J5pD02ABDp4W7c0Nb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938520 | orchestrator | skipping: [testbed-node-3] 2026-04-02 00:55:54.938527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938573 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938588 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938611 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba-osd--block--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba', 'dm-uuid-LVM-G2WDd4XiPx9HORZHRtE3mgDAzOB6fs6NM2nHDmnmtMFr3pKoNoRhNZj7lvGLYpvi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938626 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938650 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bc329f0f--76ef--5b6a--a482--1349b51ce957-osd--block--bc329f0f--76ef--5b6a--a482--1349b51ce957', 'dm-uuid-LVM-sgOQhAC0fLYGMkqYeHhJJ16yOte7p9OEs0xJMNVB78tsOVrhNOvcp9WHftWoN43H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938685 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938716 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0032d447-591d-4b7c-93ad-b7b900e6d05d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-02 00:55:54.938746 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--88a5a1a0--9236--5c9d--8025--e39ec03fb505-osd--block--88a5a1a0--9236--5c9d--8025--e39ec03fb505'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cpchoi-yKuF-8aYI-GwtW-pIyk-I2rQ-1zQAUq', 'scsi-0QEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4', 'scsi-SQEMU_QEMU_HARDDISK_9b931cae-7a06-4f63-bca1-6514ca0f11b4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938762 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b27c5b00--4597--5124--934a--fd641c3feb65-osd--block--b27c5b00--4597--5124--934a--fd641c3feb65'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0MrAeq-GoNi-4r0V-mMLh-NwwX-eFcS-sZLAKx', 'scsi-0QEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf', 'scsi-SQEMU_QEMU_HARDDISK_5e29c7a5-f411-44d3-9f54-46e8ba073aaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3', 'scsi-SQEMU_QEMU_HARDDISK_70d9d3d1-3bd9-44af-99af-3d2d8f65a3c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938799 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938806 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938816 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938823 | orchestrator | skipping: [testbed-node-4] 2026-04-02 00:55:54.938834 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part1', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part14', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part15', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part16', 'scsi-SQEMU_QEMU_HARDDISK_213be582-731a-41c2-8309-28c1726af439-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-02 00:55:54.938855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba-osd--block--ce3dc94c--dd22--5089--bd64--d73b3d29d8ba'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ytxEle-KSA8-0usH-5AQL-iyHO-U5AI-R1BFiA', 'scsi-0QEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a', 'scsi-SQEMU_QEMU_HARDDISK_3e41a6ee-2963-4f7f-bd44-4fc104801a1a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bc329f0f--76ef--5b6a--a482--1349b51ce957-osd--block--bc329f0f--76ef--5b6a--a482--1349b51ce957'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tqEXWj-CJSC-viSt-InK1-u1DN-lqDx-dUOwBK', 'scsi-0QEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9', 'scsi-SQEMU_QEMU_HARDDISK_313a743e-b82e-49a6-b933-92b7a7e896a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938878 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21', 'scsi-SQEMU_QEMU_HARDDISK_01ed39e8-1eff-44a7-98b4-951368397b21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-02-00-02-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-02 00:55:54.938895 | orchestrator | skipping: [testbed-node-5] 2026-04-02 00:55:54.938901 | orchestrator | 2026-04-02 00:55:54.938907 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-02 00:55:54.938913 | orchestrator | Thursday 02 April 2026 00:54:21 +0000 (0:00:00.506) 0:00:17.395 ******** 2026-04-02 00:55:54.938919 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:55:54.938925 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:55:54.938932 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:55:54.938937 | orchestrator | 2026-04-02 00:55:54.938943 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-02 00:55:54.938948 | orchestrator | Thursday 02 April 2026 00:54:21 +0000 (0:00:00.651) 0:00:18.046 ******** 2026-04-02 00:55:54.938954 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:55:54.938960 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:55:54.938966 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:55:54.938971 | orchestrator | 2026-04-02 00:55:54.938977 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-02 00:55:54.938982 | orchestrator | Thursday 02 April 2026 00:54:22 +0000 (0:00:00.377) 0:00:18.424 ******** 2026-04-02 00:55:54.938989 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:55:54.938995 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:55:54.939000 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:55:54.939005 | orchestrator | 2026-04-02 00:55:54.939011 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-02 00:55:54.939017 | orchestrator | Thursday 02 April 2026 00:54:22 +0000 (0:00:00.627) 0:00:19.052 
********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Read osd pool default crush rule] ***************************
Thursday 02 April 2026 00:54:23 +0000 (0:00:00.269) 0:00:19.321 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
Thursday 02 April 2026 00:54:23 +0000 (0:00:00.371) 0:00:19.693 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
Thursday 02 April 2026 00:54:23 +0000 (0:00:00.487) 0:00:20.180 ********
ok: [testbed-node-3] => (item=testbed-node-0)
ok: [testbed-node-4] => (item=testbed-node-0)
ok: [testbed-node-3] => (item=testbed-node-1)
ok: [testbed-node-5] => (item=testbed-node-0)
ok: [testbed-node-4] => (item=testbed-node-1)
ok: [testbed-node-3] => (item=testbed-node-2)
ok: [testbed-node-5] => (item=testbed-node-1)
ok: [testbed-node-4] => (item=testbed-node-2)
ok: [testbed-node-5] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
Thursday 02 April 2026 00:54:24 +0000 (0:00:00.820) 0:00:21.001 ********
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-5]

TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
Thursday 02 April 2026 00:54:25 +0000 (0:00:00.372) 0:00:21.373 ********
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 02 April 2026 00:54:25 +0000 (0:00:00.700) 0:00:22.074 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 02 April 2026 00:54:26 +0000 (0:00:00.357) 0:00:22.431 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 02 April 2026 00:54:26 +0000 (0:00:00.296) 0:00:22.728 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Thursday 02 April 2026 00:54:26 +0000 (0:00:00.323) 0:00:23.051 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact _interface] ****************************************
Thursday 02 April 2026 00:54:27 +0000 (0:00:00.566) 0:00:23.618 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 02 April 2026 00:54:27 +0000 (0:00:00.380) 0:00:23.998 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 02 April 2026 00:54:28 +0000 (0:00:00.369) 0:00:24.367 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Thursday 02 April 2026 00:54:28 +0000 (0:00:00.376) 0:00:24.744 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Thursday 02 April 2026 00:54:28 +0000 (0:00:00.303) 0:00:25.048 ********
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)

TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
Thursday 02 April 2026 00:54:29 +0000 (0:00:00.494) 0:00:25.542 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
Thursday 02 April 2026 00:54:30 +0000 (0:00:00.984) 0:00:26.527 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-3] => (item=testbed-node-3)
ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [Include tasks from the ceph-osd role] ************************************
Thursday 02 April 2026 00:54:32 +0000 (0:00:01.896) 0:00:28.423 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
included: /ansible/tasks/openstack_config.yml for testbed-node-5
TASK [create openstack pool(s)] ************************************************
Thursday 02 April 2026 00:54:32 +0000 (0:00:00.373) 0:00:28.796 ********
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [generate keys] ***********************************************************
Thursday 02 April 2026 00:55:08 +0000 (0:00:35.574) 0:01:04.370 ********
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]

TASK [get keys from monitors] **************************************************
Thursday 02 April 2026 00:55:26 +0000 (0:00:18.409) 0:01:22.780 ********
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [copy ceph key(s) if needed] **********************************************
Thursday 02 April 2026 00:55:36 +0000 (0:00:09.633) 0:01:32.413 ********
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-5 -> {{ item.1 }}]

PLAY RECAP *********************************************************************
testbed-node-3             : ok=25   changed=0    unreachable=0    failed=0    skipped=28   rescued=0    ignored=0
testbed-node-4             : ok=18   changed=0    unreachable=0    failed=0    skipped=21   rescued=0    ignored=0
testbed-node-5             : ok=23   changed=3    unreachable=0    failed=0    skipped=20   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 02 April 2026 00:55:53 +0000 (0:00:17.334) 0:01:49.748 ********
===============================================================================
create openstack pool(s) ----------------------------------------------- 35.57s
generate keys ---------------------------------------------------------- 18.41s
copy ceph key(s) if needed --------------------------------------------- 17.33s
get keys from monitors -------------------------------------------------- 9.63s
ceph-facts : Find a running mon container ------------------------------- 3.01s
ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.90s
ceph-facts : Get current fsid if cluster is already running ------------- 1.39s
ceph-facts : Check if it is atomic host --------------------------------- 1.01s
ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s
ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s
ceph-facts : Check if podman binary is present -------------------------- 0.79s
ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s
ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
ceph-facts : Collect existed devices ------------------------------------ 0.66s
ceph-facts : Check if the ceph conf exists ------------------------------ 0.65s
ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s
ceph-facts : Read osd pool default crush rule --------------------------- 0.63s
ceph-facts : Include facts.yml ------------------------------------------ 0.60s
ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.57s
ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.51s
2026-04-02 00:55:57 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:55:57 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:55:57 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:55:57 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:01 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:01 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:01 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:04 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:04 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:04 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:04 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:07 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:07 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
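The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" entries come from a client polling asynchronous task state. A minimal sketch of such a poll loop; the `get_state` callback, task IDs, and timeout handling here are illustrative assumptions, not the actual OSISM implementation (which, in a Celery-based setup, might wrap `AsyncResult(task_id).state`):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll each task until every one reports SUCCESS, or raise on timeout.

    get_state(task_id) -> str is a hypothetical callback supplied by the
    caller; interval mirrors the 'Wait 1 second(s)' cadence in the log.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note that tasks finish independently (in the log, `4a68ea66…` reaches SUCCESS at 00:56:34 while the others keep polling), which is why the sketch removes each task from `pending` as soon as it succeeds.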
2026-04-02 00:56:07 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:10 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:10 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:10 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:13 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:13 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:13 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:13 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:16 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:16 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:16 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:16 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:19 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:19 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:19 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:19 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:22 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:22 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:22 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:22 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:25 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:25 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:25 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:25 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:28 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:28 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:28 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:28 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:31 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:31 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:31 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state STARTED
2026-04-02 00:56:31 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:34 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:34 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:34 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED
2026-04-02 00:56:34 | INFO  | Task 4a68ea66-3cf8-4801-8176-4e5ccc0a3cd4 is in state SUCCESS
2026-04-02 00:56:34 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:37 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:37 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:37 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED
2026-04-02 00:56:37 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:40 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:40 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:40 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED
2026-04-02 00:56:40 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:43 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:43 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:43 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED
2026-04-02 00:56:43 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:46 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:46 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:46 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED
2026-04-02 00:56:46 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:49 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:49 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:49 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED
2026-04-02 00:56:49 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:52 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state STARTED
2026-04-02 00:56:53 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED
2026-04-02 00:56:53 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED
2026-04-02 00:56:53 | INFO  | Wait 1 second(s) until the next check
2026-04-02 00:56:56 | INFO  | Task e2536911-a8b3-41de-af4b-9cacb64efd71 is in state SUCCESS

PLAY [Copy ceph keys to the configuration repository] **************************

TASK [Check if ceph keys exist] ************************************************
Thursday 02 April 2026 00:55:57 +0000 (0:00:00.252) 0:00:00.252 ********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)

TASK [Fetch all ceph keys] *****************************************************
Thursday 02 April 2026 00:56:01 +0000 (0:00:04.530) 0:00:04.782 ********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)

TASK [Create share directory] **************************************************
Thursday 02 April 2026 00:56:06 +0000 (0:00:04.476) 0:00:09.258 ********
changed: [testbed-manager -> localhost]

TASK [Write ceph keys to the share directory] **********************************
Thursday 02 April 2026 00:56:07 +0000 (0:00:01.095) 0:00:10.354 ********
changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)

TASK [Check if target directories exist] ***************************************
Thursday 02 April 2026 00:56:21 +0000 (0:00:14.125) 0:00:24.479 ********
ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)

TASK [Write ceph keys to the configuration directory] **************************
Thursday 02 April 2026 00:56:24 +0000 (0:00:03.468) 0:00:27.947 ********
changed: [testbed-manager] => (item=ceph.client.admin.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
changed: [testbed-manager] => (item=ceph.client.nova.keyring)
changed: [testbed-manager] => (item=ceph.client.glance.keyring)
changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-02 00:56:56.120145 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-02 00:56:56.120152 | orchestrator | 2026-04-02 00:56:56.120158 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:56:56.120164 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:56:56.120171 | orchestrator | 2026-04-02 00:56:56.120178 | orchestrator | 2026-04-02 00:56:56.120184 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:56:56.120190 | orchestrator | Thursday 02 April 2026 00:56:31 +0000 (0:00:06.812) 0:00:34.760 ******** 2026-04-02 00:56:56.120202 | orchestrator | =============================================================================== 2026-04-02 00:56:56.120209 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.13s 2026-04-02 00:56:56.120215 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.81s 2026-04-02 00:56:56.120221 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.53s 2026-04-02 00:56:56.120227 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.48s 2026-04-02 00:56:56.120233 | orchestrator | Check if target directories exist --------------------------------------- 3.47s 2026-04-02 00:56:56.120239 | orchestrator | Create share directory -------------------------------------------------- 1.10s 2026-04-02 00:56:56.120245 | orchestrator | 2026-04-02 00:56:56.120251 | orchestrator | 2026-04-02 00:56:56.120258 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:56:56.120264 | orchestrator | 2026-04-02 00:56:56.120270 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2026-04-02 00:56:56.120276 | orchestrator | Thursday 02 April 2026 00:55:21 +0000 (0:00:00.222) 0:00:00.222 ******** 2026-04-02 00:56:56.120282 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.120289 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.120302 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.120308 | orchestrator | 2026-04-02 00:56:56.120314 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:56:56.120320 | orchestrator | Thursday 02 April 2026 00:55:21 +0000 (0:00:00.251) 0:00:00.473 ******** 2026-04-02 00:56:56.120327 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-02 00:56:56.120333 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-02 00:56:56.120339 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-02 00:56:56.120346 | orchestrator | 2026-04-02 00:56:56.120352 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-02 00:56:56.120358 | orchestrator | 2026-04-02 00:56:56.120364 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-02 00:56:56.120371 | orchestrator | Thursday 02 April 2026 00:55:21 +0000 (0:00:00.240) 0:00:00.714 ******** 2026-04-02 00:56:56.120377 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:56:56.120383 | orchestrator | 2026-04-02 00:56:56.120389 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-02 00:56:56.120396 | orchestrator | Thursday 02 April 2026 00:55:22 +0000 (0:00:00.489) 0:00:01.203 ******** 2026-04-02 00:56:56.120418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-04-02 00:56:56.120434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-02 00:56:56.120589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-02 00:56:56.120599 | orchestrator | 2026-04-02 00:56:56.120607 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-02 00:56:56.120614 | orchestrator | Thursday 02 April 2026 00:55:23 +0000 (0:00:01.301) 0:00:02.505 ******** 2026-04-02 00:56:56.120622 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.120635 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.120643 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.120650 | orchestrator | 2026-04-02 00:56:56.120657 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-02 00:56:56.120665 | orchestrator | Thursday 02 April 2026 00:55:23 +0000 (0:00:00.243) 0:00:02.749 ******** 2026-04-02 00:56:56.120672 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-02 00:56:56.120679 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-02 00:56:56.120687 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-02 00:56:56.120694 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-02 00:56:56.120701 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-02 00:56:56.120708 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-02 00:56:56.120715 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'trove', 'enabled': False})  2026-04-02 00:56:56.120721 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-02 00:56:56.120727 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-02 00:56:56.120733 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-02 00:56:56.120740 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-02 00:56:56.120746 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-02 00:56:56.120752 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-02 00:56:56.120758 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-02 00:56:56.120764 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-02 00:56:56.120770 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-02 00:56:56.120776 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-02 00:56:56.120782 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-02 00:56:56.120788 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-02 00:56:56.120794 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-02 00:56:56.120800 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-02 00:56:56.120807 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-02 00:56:56.120817 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-02 00:56:56.120827 | orchestrator | 
skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-02 00:56:56.120839 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-02 00:56:56.120851 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-02 00:56:56.120861 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-02 00:56:56.120871 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-02 00:56:56.120881 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-02 00:56:56.120899 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-02 00:56:56.120909 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-02 00:56:56.120919 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-02 00:56:56.120930 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-02 00:56:56.120940 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 
=> (item={'name': 'octavia', 'enabled': True}) 2026-04-02 00:56:56.120951 | orchestrator | 2026-04-02 00:56:56.120961 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.120972 | orchestrator | Thursday 02 April 2026 00:55:24 +0000 (0:00:00.616) 0:00:03.365 ******** 2026-04-02 00:56:56.120979 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.120985 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.120992 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.120998 | orchestrator | 2026-04-02 00:56:56.121004 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-02 00:56:56.121010 | orchestrator | Thursday 02 April 2026 00:55:24 +0000 (0:00:00.343) 0:00:03.709 ******** 2026-04-02 00:56:56.121016 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121023 | orchestrator | 2026-04-02 00:56:56.121029 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-02 00:56:56.121035 | orchestrator | Thursday 02 April 2026 00:55:24 +0000 (0:00:00.104) 0:00:03.814 ******** 2026-04-02 00:56:56.121041 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121102 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:56:56.121109 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:56:56.121115 | orchestrator | 2026-04-02 00:56:56.121122 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.121128 | orchestrator | Thursday 02 April 2026 00:55:25 +0000 (0:00:00.223) 0:00:04.038 ******** 2026-04-02 00:56:56.121134 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.121140 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.121146 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.121153 | orchestrator | 2026-04-02 00:56:56.121159 | orchestrator | TASK [horizon : Check if policies shall be 
overwritten] ************************ 2026-04-02 00:56:56.121165 | orchestrator | Thursday 02 April 2026 00:55:25 +0000 (0:00:00.242) 0:00:04.280 ******** 2026-04-02 00:56:56.121171 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121177 | orchestrator | 2026-04-02 00:56:56.121183 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-02 00:56:56.121189 | orchestrator | Thursday 02 April 2026 00:55:25 +0000 (0:00:00.112) 0:00:04.393 ******** 2026-04-02 00:56:56.121196 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121202 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:56:56.121208 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:56:56.121214 | orchestrator | 2026-04-02 00:56:56.121220 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.121226 | orchestrator | Thursday 02 April 2026 00:55:25 +0000 (0:00:00.341) 0:00:04.734 ******** 2026-04-02 00:56:56.121232 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.121238 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.121244 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.121250 | orchestrator | 2026-04-02 00:56:56.121257 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-02 00:56:56.121263 | orchestrator | Thursday 02 April 2026 00:55:26 +0000 (0:00:00.301) 0:00:05.035 ******** 2026-04-02 00:56:56.121275 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121282 | orchestrator | 2026-04-02 00:56:56.121288 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-02 00:56:56.121294 | orchestrator | Thursday 02 April 2026 00:55:26 +0000 (0:00:00.105) 0:00:05.141 ******** 2026-04-02 00:56:56.121300 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121306 | orchestrator | skipping: [testbed-node-1] 
2026-04-02 00:56:56.121312 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:56:56.121318 | orchestrator | 2026-04-02 00:56:56.121324 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.121336 | orchestrator | Thursday 02 April 2026 00:55:26 +0000 (0:00:00.241) 0:00:05.383 ******** 2026-04-02 00:56:56.121342 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.121349 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.121355 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.121361 | orchestrator | 2026-04-02 00:56:56.121367 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-02 00:56:56.121373 | orchestrator | Thursday 02 April 2026 00:55:26 +0000 (0:00:00.307) 0:00:05.690 ******** 2026-04-02 00:56:56.121379 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121385 | orchestrator | 2026-04-02 00:56:56.121391 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-02 00:56:56.121397 | orchestrator | Thursday 02 April 2026 00:55:26 +0000 (0:00:00.108) 0:00:05.798 ******** 2026-04-02 00:56:56.121403 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121410 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:56:56.121416 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:56:56.121422 | orchestrator | 2026-04-02 00:56:56.121428 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.121434 | orchestrator | Thursday 02 April 2026 00:55:27 +0000 (0:00:00.342) 0:00:06.140 ******** 2026-04-02 00:56:56.121440 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.121446 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.121452 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.121458 | orchestrator | 2026-04-02 00:56:56.121495 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-04-02 00:56:56.121502 | orchestrator | Thursday 02 April 2026 00:55:27 +0000 (0:00:00.259) 0:00:06.400 ******** 2026-04-02 00:56:56.121508 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121514 | orchestrator | 2026-04-02 00:56:56.121520 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-02 00:56:56.121526 | orchestrator | Thursday 02 April 2026 00:55:27 +0000 (0:00:00.101) 0:00:06.502 ******** 2026-04-02 00:56:56.121533 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121539 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:56:56.121545 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:56:56.121551 | orchestrator | 2026-04-02 00:56:56.121557 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.121563 | orchestrator | Thursday 02 April 2026 00:55:27 +0000 (0:00:00.236) 0:00:06.738 ******** 2026-04-02 00:56:56.121569 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.121575 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.121581 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.121587 | orchestrator | 2026-04-02 00:56:56.121597 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-02 00:56:56.121603 | orchestrator | Thursday 02 April 2026 00:55:28 +0000 (0:00:00.254) 0:00:06.993 ******** 2026-04-02 00:56:56.121609 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121615 | orchestrator | 2026-04-02 00:56:56.121622 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-02 00:56:56.121628 | orchestrator | Thursday 02 April 2026 00:55:28 +0000 (0:00:00.206) 0:00:07.200 ******** 2026-04-02 00:56:56.121634 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121640 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 00:56:56.121646 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:56:56.121658 | orchestrator | 2026-04-02 00:56:56.121664 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.121670 | orchestrator | Thursday 02 April 2026 00:55:28 +0000 (0:00:00.261) 0:00:07.461 ******** 2026-04-02 00:56:56.121679 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.121689 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.121700 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.121710 | orchestrator | 2026-04-02 00:56:56.121719 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-02 00:56:56.121735 | orchestrator | Thursday 02 April 2026 00:55:28 +0000 (0:00:00.266) 0:00:07.728 ******** 2026-04-02 00:56:56.121748 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121758 | orchestrator | 2026-04-02 00:56:56.121768 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-02 00:56:56.121778 | orchestrator | Thursday 02 April 2026 00:55:28 +0000 (0:00:00.100) 0:00:07.828 ******** 2026-04-02 00:56:56.121788 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:56:56.121798 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:56:56.121808 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:56:56.121820 | orchestrator | 2026-04-02 00:56:56.121830 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-02 00:56:56.121841 | orchestrator | Thursday 02 April 2026 00:55:29 +0000 (0:00:00.251) 0:00:08.079 ******** 2026-04-02 00:56:56.121852 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:56:56.121862 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:56:56.121872 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:56:56.121883 | orchestrator | 2026-04-02 00:56:56.121893 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************
2026-04-02 00:56:56.121905 | orchestrator | Thursday 02 April 2026 00:55:29 +0000 (0:00:00.366) 0:00:08.446 ********
2026-04-02 00:56:56.121912 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.121918 | orchestrator |
2026-04-02 00:56:56.121925 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-02 00:56:56.121931 | orchestrator | Thursday 02 April 2026 00:55:29 +0000 (0:00:00.129) 0:00:08.575 ********
2026-04-02 00:56:56.121937 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.121943 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.121949 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.121955 | orchestrator |
2026-04-02 00:56:56.121961 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-02 00:56:56.121967 | orchestrator | Thursday 02 April 2026 00:55:29 +0000 (0:00:00.329) 0:00:08.904 ********
2026-04-02 00:56:56.121973 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:56:56.121980 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:56:56.121986 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:56:56.121992 | orchestrator |
2026-04-02 00:56:56.121998 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-02 00:56:56.122004 | orchestrator | Thursday 02 April 2026 00:55:30 +0000 (0:00:00.266) 0:00:09.171 ********
2026-04-02 00:56:56.122010 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.122104 | orchestrator |
2026-04-02 00:56:56.122118 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-02 00:56:56.122125 | orchestrator | Thursday 02 April 2026 00:55:30 +0000 (0:00:00.097) 0:00:09.268 ********
2026-04-02 00:56:56.122131 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.122137 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.122143 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.122150 | orchestrator |
2026-04-02 00:56:56.122156 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-02 00:56:56.122162 | orchestrator | Thursday 02 April 2026 00:55:30 +0000 (0:00:00.248) 0:00:09.517 ********
2026-04-02 00:56:56.122168 | orchestrator | ok: [testbed-node-0]
2026-04-02 00:56:56.122175 | orchestrator | ok: [testbed-node-1]
2026-04-02 00:56:56.122181 | orchestrator | ok: [testbed-node-2]
2026-04-02 00:56:56.122187 | orchestrator |
2026-04-02 00:56:56.122205 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-02 00:56:56.122212 | orchestrator | Thursday 02 April 2026 00:55:31 +0000 (0:00:00.440) 0:00:09.958 ********
2026-04-02 00:56:56.122218 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.122224 | orchestrator |
2026-04-02 00:56:56.122230 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-02 00:56:56.122236 | orchestrator | Thursday 02 April 2026 00:55:31 +0000 (0:00:00.114) 0:00:10.072 ********
2026-04-02 00:56:56.122243 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.122249 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.122255 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.122262 | orchestrator |
2026-04-02 00:56:56.122268 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-02 00:56:56.122274 | orchestrator | Thursday 02 April 2026 00:55:31 +0000 (0:00:00.286) 0:00:10.358 ********
2026-04-02 00:56:56.122280 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:56:56.122286 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:56:56.122292 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:56:56.122298 | orchestrator |
2026-04-02 00:56:56.122305 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-02 00:56:56.122311 | orchestrator | Thursday 02 April 2026 00:55:33 +0000 (0:00:01.677) 0:00:12.036 ********
2026-04-02 00:56:56.122318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-02 00:56:56.122328 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-02 00:56:56.122338 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-02 00:56:56.122352 | orchestrator |
2026-04-02 00:56:56.122373 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-02 00:56:56.122384 | orchestrator | Thursday 02 April 2026 00:55:34 +0000 (0:00:01.832) 0:00:13.868 ********
2026-04-02 00:56:56.122394 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-02 00:56:56.122405 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-02 00:56:56.122416 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-02 00:56:56.122426 | orchestrator |
2026-04-02 00:56:56.122436 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-02 00:56:56.122445 | orchestrator | Thursday 02 April 2026 00:55:37 +0000 (0:00:02.828) 0:00:16.697 ********
2026-04-02 00:56:56.122455 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-02 00:56:56.122465 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-02 00:56:56.122475 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-02 00:56:56.122486 | orchestrator |
2026-04-02 00:56:56.122496 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-02 00:56:56.122507 | orchestrator | Thursday 02 April 2026 00:55:39 +0000 (0:00:01.877) 0:00:18.575 ********
2026-04-02 00:56:56.122517 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.122527 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.122537 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.122547 | orchestrator |
2026-04-02 00:56:56.122557 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-02 00:56:56.122567 | orchestrator | Thursday 02 April 2026 00:55:40 +0000 (0:00:00.389) 0:00:18.965 ********
2026-04-02 00:56:56.122578 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.122589 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.122600 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.122610 | orchestrator |
2026-04-02 00:56:56.122620 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-02 00:56:56.122641 | orchestrator | Thursday 02 April 2026 00:55:40 +0000 (0:00:00.374) 0:00:19.339 ********
2026-04-02 00:56:56.122652 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:56:56.122664 | orchestrator |
2026-04-02 00:56:56.122675 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-02 00:56:56.122687 | orchestrator | Thursday 02 April 2026 00:55:41 +0000 (0:00:00.810) 0:00:20.149 ********
2026-04-02 00:56:56.122719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.122733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.122767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.122780 | orchestrator |
2026-04-02 00:56:56.122791 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-04-02 00:56:56.122802 | orchestrator | Thursday 02 April 2026 00:55:42 +0000 (0:00:01.480) 0:00:21.630 ********
2026-04-02 00:56:56.122820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.122979 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.123001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123013 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.123032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123077 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.123090 | orchestrator |
2026-04-02 00:56:56.123101 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-04-02 00:56:56.123112 | orchestrator | Thursday 02 April 2026 00:55:43 +0000 (0:00:00.873) 0:00:22.504 ********
2026-04-02 00:56:56.123130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123148 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.123168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123180 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.123197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123215 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.123226 | orchestrator |
2026-04-02 00:56:56.123237 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-04-02 00:56:56.123249 | orchestrator | Thursday 02 April 2026 00:55:44 +0000 (0:00:01.068) 0:00:23.573 ********
2026-04-02 00:56:56.123346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-02 00:56:56.123428 | orchestrator |
2026-04-02 00:56:56.123440 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-02 00:56:56.123459 | orchestrator | Thursday 02 April 2026 00:55:45 +0000 (0:00:01.340) 0:00:24.913 ********
2026-04-02 00:56:56.123469 | orchestrator | skipping: [testbed-node-0]
2026-04-02 00:56:56.123479 | orchestrator | skipping: [testbed-node-1]
2026-04-02 00:56:56.123489 | orchestrator | skipping: [testbed-node-2]
2026-04-02 00:56:56.123498 | orchestrator |
2026-04-02 00:56:56.123508 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-02 00:56:56.123518 | orchestrator | Thursday 02 April 2026 00:55:46 +0000 (0:00:00.291) 0:00:25.204 ********
2026-04-02 00:56:56.123535 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 00:56:56.123545 | orchestrator |
2026-04-02 00:56:56.123555 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-02 00:56:56.123566 | orchestrator | Thursday 02 April 2026 00:55:46 +0000 (0:00:00.680) 0:00:25.885 ********
2026-04-02 00:56:56.123576 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:56:56.123586 | orchestrator |
2026-04-02 00:56:56.123596 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-04-02 00:56:56.123606 | orchestrator | Thursday 02 April 2026 00:55:49 +0000 (0:00:02.464) 0:00:28.350 ********
2026-04-02 00:56:56.123616 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:56:56.123626 | orchestrator |
2026-04-02 00:56:56.123637 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-02 00:56:56.123647 | orchestrator | Thursday 02 April 2026 00:55:51 +0000 (0:00:02.458) 0:00:30.808 ********
2026-04-02 00:56:56.123659 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:56:56.123669 | orchestrator |
2026-04-02 00:56:56.123679 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-02 00:56:56.123690 | orchestrator | Thursday 02 April 2026 00:56:08 +0000 (0:00:16.645) 0:00:47.454 ********
2026-04-02 00:56:56.123699 | orchestrator |
2026-04-02 00:56:56.123709 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-02 00:56:56.123720 | orchestrator | Thursday 02 April 2026 00:56:08 +0000 (0:00:00.085) 0:00:47.539 ********
2026-04-02 00:56:56.123730 | orchestrator |
2026-04-02 00:56:56.123740 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-02 00:56:56.123751 | orchestrator | Thursday 02 April 2026 00:56:08 +0000 (0:00:00.062) 0:00:47.602 ********
2026-04-02 00:56:56.123762 | orchestrator |
2026-04-02 00:56:56.123773 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-02 00:56:56.123783 | orchestrator | Thursday 02 April 2026 00:56:08 +0000 (0:00:00.089) 0:00:47.691 ********
2026-04-02 00:56:56.123794 | orchestrator | changed: [testbed-node-0]
2026-04-02 00:56:56.123804 | orchestrator | changed: [testbed-node-2]
2026-04-02 00:56:56.123816 | orchestrator | changed: [testbed-node-1]
2026-04-02 00:56:56.123826 | orchestrator |
2026-04-02 00:56:56.123837 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 00:56:56.123855 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-02 00:56:56.123866 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-02 00:56:56.123876 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-02 00:56:56.123887 | orchestrator |
2026-04-02 00:56:56.123898 | orchestrator |
2026-04-02 00:56:56.123915 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 00:56:56.123926 | orchestrator | Thursday 02 April 2026 00:56:55 +0000 (0:00:46.958) 0:01:34.649 ********
2026-04-02 00:56:56.123936 | orchestrator | ===============================================================================
2026-04-02 00:56:56.123946 | orchestrator | horizon : Restart horizon container ------------------------------------ 46.96s
2026-04-02 00:56:56.123956 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.65s
2026-04-02 00:56:56.123966 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.83s
2026-04-02 00:56:56.123976 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.46s
2026-04-02 00:56:56.123987 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.46s
2026-04-02 00:56:56.123997 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.88s
2026-04-02 00:56:56.124007 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.83s
2026-04-02 00:56:56.124026 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.68s
2026-04-02 00:56:56.124037 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.48s
2026-04-02 00:56:56.124071 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.34s
2026-04-02 00:56:56.124081 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.30s
2026-04-02 00:56:56.124091 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.07s
2026-04-02 00:56:56.124101 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.87s
2026-04-02 00:56:56.124111 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s
2026-04-02 00:56:56.124121 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s
2026-04-02 00:56:56.124132 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s
2026-04-02 00:56:56.124143 | orchestrator | horizon : include_tasks
------------------------------------------------- 0.49s 2026-04-02 00:56:56.124153 | orchestrator | horizon : Update policy file name --------------------------------------- 0.44s 2026-04-02 00:56:56.124171 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.39s 2026-04-02 00:56:56.124182 | orchestrator | horizon : Copying over custom themes ------------------------------------ 0.37s 2026-04-02 00:56:56.124192 | orchestrator | 2026-04-02 00:56:56 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:56:56.124204 | orchestrator | 2026-04-02 00:56:56 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED 2026-04-02 00:56:56.124215 | orchestrator | 2026-04-02 00:56:56 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:56:59.165855 | orchestrator | 2026-04-02 00:56:59 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:56:59.168483 | orchestrator | 2026-04-02 00:56:59 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED 2026-04-02 00:56:59.168607 | orchestrator | 2026-04-02 00:56:59 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:02.219501 | orchestrator | 2026-04-02 00:57:02 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:02.222348 | orchestrator | 2026-04-02 00:57:02 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED 2026-04-02 00:57:02.222457 | orchestrator | 2026-04-02 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:05.264874 | orchestrator | 2026-04-02 00:57:05 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:05.266845 | orchestrator | 2026-04-02 00:57:05 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED 2026-04-02 00:57:05.267377 | orchestrator | 2026-04-02 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:08.306592 | 
orchestrator | 2026-04-02 00:57:08 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:08.307474 | orchestrator | 2026-04-02 00:57:08 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED 2026-04-02 00:57:08.307537 | orchestrator | 2026-04-02 00:57:08 | INFO  | Wait 1 second(s) until the next check [identical STARTED polls for both tasks repeated every ~3 s from 00:57:11 through 00:57:20 elided] 2026-04-02 00:57:23.560245 | orchestrator | 2026-04-02 00:57:23 | INFO  | Task 
bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:23.561596 | orchestrator | 2026-04-02 00:57:23 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED 2026-04-02 00:57:23.561658 | orchestrator | 2026-04-02 00:57:23 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:26.607092 | orchestrator | 2026-04-02 00:57:26 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:26.608943 | orchestrator | 2026-04-02 00:57:26 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state STARTED 2026-04-02 00:57:26.608982 | orchestrator | 2026-04-02 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:29.659917 | orchestrator | 2026-04-02 00:57:29 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:57:29.660000 | orchestrator | 2026-04-02 00:57:29 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:29.662008 | orchestrator | 2026-04-02 00:57:29 | INFO  | Task b6c2922c-0074-481c-ba4e-f014ddd8afcc is in state STARTED 2026-04-02 00:57:29.664127 | orchestrator | 2026-04-02 00:57:29 | INFO  | Task 84d0bb8e-f32e-438d-b9da-2dec8f7f7d2e is in state SUCCESS 2026-04-02 00:57:29.665333 | orchestrator | 2026-04-02 00:57:29 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:57:29.665461 | orchestrator | 2026-04-02 00:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:32.696723 | orchestrator | 2026-04-02 00:57:32 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:57:32.698359 | orchestrator | 2026-04-02 00:57:32 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:32.701444 | orchestrator | 2026-04-02 00:57:32 | INFO  | Task b6c2922c-0074-481c-ba4e-f014ddd8afcc is in state SUCCESS 2026-04-02 00:57:32.702478 | orchestrator | 2026-04-02 00:57:32 | INFO  | Task 
692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:57:32.702524 | orchestrator | 2026-04-02 00:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:35.735319 | orchestrator | 2026-04-02 00:57:35 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:57:35.736477 | orchestrator | 2026-04-02 00:57:35 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:35.737372 | orchestrator | 2026-04-02 00:57:35 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:57:35.738301 | orchestrator | 2026-04-02 00:57:35 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:57:35.738991 | orchestrator | 2026-04-02 00:57:35 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:57:35.740704 | orchestrator | 2026-04-02 00:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:38.852914 | orchestrator | 2026-04-02 00:57:38 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:57:38.853000 | orchestrator | 2026-04-02 00:57:38 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:38.853065 | orchestrator | 2026-04-02 00:57:38 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:57:38.853080 | orchestrator | 2026-04-02 00:57:38 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:57:38.853091 | orchestrator | 2026-04-02 00:57:38 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:57:38.853104 | orchestrator | 2026-04-02 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:57:41.867358 | orchestrator | 2026-04-02 00:57:41 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:57:41.867968 | orchestrator | 2026-04-02 00:57:41 | INFO  | Task 
bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:57:41.869669 | orchestrator | 2026-04-02 00:57:41 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:57:41.874059 | orchestrator | 2026-04-02 00:57:41 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:57:41.874118 | orchestrator | 2026-04-02 00:57:41 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:57:41.874124 | orchestrator | 2026-04-02 00:57:41 | INFO  | Wait 1 second(s) until the next check [identical STARTED polls for all five tasks repeated every ~3 s from 00:57:44 through 00:58:00 elided] 2026-04-02 00:58:03.146504 | orchestrator | 2026-04-02 00:58:03 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:03.146605 | orchestrator | 2026-04-02 00:58:03 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state STARTED 2026-04-02 00:58:03.147173 | orchestrator | 2026-04-02 00:58:03 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:58:03.147519 | orchestrator | 2026-04-02 00:58:03 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:58:03.148149 | orchestrator | 2026-04-02 00:58:03 | INFO  | Task 
07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:03.148194 | orchestrator | 2026-04-02 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:06.210349 | orchestrator | 2026-04-02 00:58:06 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:06.210435 | orchestrator | 2026-04-02 00:58:06 | INFO  | Task bb29f833-7bd7-4614-8dfe-8d9119582ba0 is in state SUCCESS 2026-04-02 00:58:06.210953 | orchestrator | 2026-04-02 00:58:06.210976 | orchestrator | 2026-04-02 00:58:06.211021 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-02 00:58:06.211027 | orchestrator | 2026-04-02 00:58:06.211031 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-02 00:58:06.211035 | orchestrator | Thursday 02 April 2026 00:56:35 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-04-02 00:58:06.211040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-02 00:58:06.211046 | orchestrator | 2026-04-02 00:58:06.211050 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-02 00:58:06.211054 | orchestrator | Thursday 02 April 2026 00:56:35 +0000 (0:00:00.214) 0:00:00.511 ******** 2026-04-02 00:58:06.211058 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-02 00:58:06.211063 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-02 00:58:06.211068 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-02 00:58:06.211072 | orchestrator | 2026-04-02 00:58:06.211075 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-02 00:58:06.211079 | orchestrator | Thursday 02 April 2026 00:56:37 +0000 (0:00:01.602) 0:00:02.114 ******** 
2026-04-02 00:58:06.211083 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-02 00:58:06.211087 | orchestrator | 2026-04-02 00:58:06.211091 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-02 00:58:06.211094 | orchestrator | Thursday 02 April 2026 00:56:38 +0000 (0:00:01.181) 0:00:03.296 ******** 2026-04-02 00:58:06.211098 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:06.211103 | orchestrator | 2026-04-02 00:58:06.211107 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-02 00:58:06.211111 | orchestrator | Thursday 02 April 2026 00:56:39 +0000 (0:00:00.883) 0:00:04.180 ******** 2026-04-02 00:58:06.211117 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:06.211123 | orchestrator | 2026-04-02 00:58:06.211128 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-02 00:58:06.211134 | orchestrator | Thursday 02 April 2026 00:56:40 +0000 (0:00:00.891) 0:00:05.071 ******** 2026-04-02 00:58:06.211139 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-04-02 00:58:06.211145 | orchestrator | ok: [testbed-manager] 2026-04-02 00:58:06.211151 | orchestrator | 2026-04-02 00:58:06.211156 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-02 00:58:06.211161 | orchestrator | Thursday 02 April 2026 00:57:17 +0000 (0:00:37.765) 0:00:42.837 ******** 2026-04-02 00:58:06.211167 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-02 00:58:06.211173 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-02 00:58:06.211178 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-02 00:58:06.211183 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-02 00:58:06.211189 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-02 00:58:06.211195 | orchestrator | 2026-04-02 00:58:06.211201 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-02 00:58:06.211206 | orchestrator | Thursday 02 April 2026 00:57:21 +0000 (0:00:04.073) 0:00:46.910 ******** 2026-04-02 00:58:06.211344 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-02 00:58:06.211353 | orchestrator | 2026-04-02 00:58:06.211357 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-02 00:58:06.211561 | orchestrator | Thursday 02 April 2026 00:57:22 +0000 (0:00:00.628) 0:00:47.539 ******** 2026-04-02 00:58:06.211568 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:58:06.211573 | orchestrator | 2026-04-02 00:58:06.211579 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-02 00:58:06.211584 | orchestrator | Thursday 02 April 2026 00:57:22 +0000 (0:00:00.127) 0:00:47.666 ******** 2026-04-02 00:58:06.211590 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:58:06.211596 | orchestrator | 2026-04-02 00:58:06.211601 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-04-02 00:58:06.211607 | orchestrator | Thursday 02 April 2026 00:57:23 +0000 (0:00:00.288) 0:00:47.955 ******** 2026-04-02 00:58:06.211612 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:06.211689 | orchestrator | 2026-04-02 00:58:06.211698 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-02 00:58:06.211705 | orchestrator | Thursday 02 April 2026 00:57:24 +0000 (0:00:01.346) 0:00:49.301 ******** 2026-04-02 00:58:06.211711 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:06.211718 | orchestrator | 2026-04-02 00:58:06.211724 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-02 00:58:06.211730 | orchestrator | Thursday 02 April 2026 00:57:25 +0000 (0:00:00.754) 0:00:50.055 ******** 2026-04-02 00:58:06.211736 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:06.211742 | orchestrator | 2026-04-02 00:58:06.211748 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-02 00:58:06.211755 | orchestrator | Thursday 02 April 2026 00:57:25 +0000 (0:00:00.571) 0:00:50.626 ******** 2026-04-02 00:58:06.211762 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-02 00:58:06.211769 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-02 00:58:06.211776 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-02 00:58:06.211783 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-02 00:58:06.211789 | orchestrator | 2026-04-02 00:58:06.211795 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:58:06.211802 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 00:58:06.211810 | orchestrator | 2026-04-02 00:58:06.211817 | orchestrator | 2026-04-02 
00:58:06.211847 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:58:06.211864 | orchestrator | Thursday 02 April 2026 00:57:27 +0000 (0:00:01.373) 0:00:52.000 ******** 2026-04-02 00:58:06.211871 | orchestrator | =============================================================================== 2026-04-02 00:58:06.211878 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.77s 2026-04-02 00:58:06.211884 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.07s 2026-04-02 00:58:06.211890 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.60s 2026-04-02 00:58:06.211896 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.37s 2026-04-02 00:58:06.211902 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.35s 2026-04-02 00:58:06.211908 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.18s 2026-04-02 00:58:06.211914 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s 2026-04-02 00:58:06.211921 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s 2026-04-02 00:58:06.211927 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s 2026-04-02 00:58:06.211933 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.63s 2026-04-02 00:58:06.211939 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2026-04-02 00:58:06.211945 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2026-04-02 00:58:06.211952 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2026-04-02 00:58:06.211969 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-04-02 00:58:06.211976 | orchestrator | 2026-04-02 00:58:06.211982 | orchestrator | 2026-04-02 00:58:06.212005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:58:06.212011 | orchestrator | 2026-04-02 00:58:06.212017 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:58:06.212023 | orchestrator | Thursday 02 April 2026 00:57:30 +0000 (0:00:00.182) 0:00:00.182 ******** 2026-04-02 00:58:06.212030 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.212036 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:58:06.212042 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:06.212048 | orchestrator | 2026-04-02 00:58:06.212055 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:58:06.212060 | orchestrator | Thursday 02 April 2026 00:57:30 +0000 (0:00:00.313) 0:00:00.495 ******** 2026-04-02 00:58:06.212067 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-02 00:58:06.212073 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-02 00:58:06.212079 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-02 00:58:06.212085 | orchestrator | 2026-04-02 00:58:06.212092 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-04-02 00:58:06.212098 | orchestrator | 2026-04-02 00:58:06.212104 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-04-02 00:58:06.212111 | orchestrator | Thursday 02 April 2026 00:57:30 +0000 (0:00:00.458) 0:00:00.954 ******** 2026-04-02 00:58:06.212117 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:06.212123 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.212130 | orchestrator | ok: 
[testbed-node-1] 2026-04-02 00:58:06.212136 | orchestrator | 2026-04-02 00:58:06.212142 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:58:06.212150 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:06.212157 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:06.212164 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:06.212170 | orchestrator | 2026-04-02 00:58:06.212176 | orchestrator | 2026-04-02 00:58:06.212182 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:58:06.212189 | orchestrator | Thursday 02 April 2026 00:57:31 +0000 (0:00:01.023) 0:00:01.978 ******** 2026-04-02 00:58:06.212195 | orchestrator | =============================================================================== 2026-04-02 00:58:06.212201 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.02s 2026-04-02 00:58:06.212207 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-04-02 00:58:06.212214 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-02 00:58:06.212220 | orchestrator | 2026-04-02 00:58:06.212226 | orchestrator | 2026-04-02 00:58:06.212232 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:58:06.212239 | orchestrator | 2026-04-02 00:58:06.212245 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:58:06.212251 | orchestrator | Thursday 02 April 2026 00:55:21 +0000 (0:00:00.271) 0:00:00.271 ******** 2026-04-02 00:58:06.212257 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.212263 | 
orchestrator | ok: [testbed-node-1] 2026-04-02 00:58:06.212269 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:06.212276 | orchestrator | 2026-04-02 00:58:06.212282 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:58:06.212289 | orchestrator | Thursday 02 April 2026 00:55:21 +0000 (0:00:00.234) 0:00:00.505 ******** 2026-04-02 00:58:06.212302 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-02 00:58:06.212308 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-02 00:58:06.212315 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-02 00:58:06.212321 | orchestrator | 2026-04-02 00:58:06.212327 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-02 00:58:06.212333 | orchestrator | 2026-04-02 00:58:06.212363 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-02 00:58:06.212371 | orchestrator | Thursday 02 April 2026 00:55:22 +0000 (0:00:00.264) 0:00:00.770 ******** 2026-04-02 00:58:06.212378 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:58:06.212384 | orchestrator | 2026-04-02 00:58:06.212390 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-02 00:58:06.212397 | orchestrator | Thursday 02 April 2026 00:55:22 +0000 (0:00:00.541) 0:00:01.312 ******** 2026-04-02 00:58:06.212408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.212418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.212427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.212439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-04-02 00:58:06.212475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212496 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212503 | orchestrator | 2026-04-02 00:58:06.212509 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-02 00:58:06.212515 | orchestrator | Thursday 02 April 2026 00:55:24 +0000 (0:00:01.913) 0:00:03.225 ******** 2026-04-02 00:58:06.212526 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.212533 | orchestrator | 2026-04-02 00:58:06.212539 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-02 00:58:06.212546 | orchestrator | Thursday 02 April 2026 00:55:24 +0000 (0:00:00.099) 0:00:03.325 ******** 2026-04-02 00:58:06.212552 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.212558 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.212565 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.212571 | orchestrator | 2026-04-02 00:58:06.212577 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-02 00:58:06.212584 | orchestrator | Thursday 02 April 2026 00:55:24 +0000 (0:00:00.234) 0:00:03.559 ******** 2026-04-02 00:58:06.212590 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:58:06.212597 | orchestrator | 2026-04-02 00:58:06.212602 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-02 
00:58:06.212608 | orchestrator | Thursday 02 April 2026 00:55:25 +0000 (0:00:00.767) 0:00:04.326 ******** 2026-04-02 00:58:06.212614 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:58:06.212620 | orchestrator | 2026-04-02 00:58:06.212626 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-02 00:58:06.212639 | orchestrator | Thursday 02 April 2026 00:55:26 +0000 (0:00:00.560) 0:00:04.887 ******** 2026-04-02 00:58:06.212647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.212654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.212661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.212673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.212736 | orchestrator | 2026-04-02 00:58:06.212743 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-02 00:58:06.212749 | orchestrator | Thursday 02 April 2026 00:55:29 +0000 (0:00:03.218) 0:00:08.105 ******** 2026-04-02 00:58:06.212756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.212771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.212779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.212785 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.212792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.212798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-02 00:58:06.212810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.212817 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.212833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.212840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.212847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.212853 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.212860 | orchestrator | 2026-04-02 00:58:06.212866 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-02 00:58:06.212872 | orchestrator | Thursday 02 April 2026 00:55:29 +0000 (0:00:00.503) 0:00:08.609 ******** 2026-04-02 00:58:06.212879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.212892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.212898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.212905 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.212920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.212927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.212934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.212946 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.212952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.212959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 
00:58:06.212972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.212979 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.212986 | orchestrator | 2026-04-02 00:58:06.213167 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-02 00:58:06.213172 | orchestrator | Thursday 02 April 2026 00:55:30 +0000 (0:00:00.907) 0:00:09.517 ******** 2026-04-02 00:58:06.213177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-04-02 00:58:06.213240 | orchestrator | 2026-04-02 00:58:06.213244 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-02 00:58:06.213248 | orchestrator | Thursday 02 April 2026 00:55:34 +0000 (0:00:03.337) 0:00:12.854 ******** 2026-04-02 00:58:06.213259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 
00:58:06.213268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.213280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.213295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213311 | orchestrator | 2026-04-02 00:58:06.213315 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-02 00:58:06.213319 | orchestrator | Thursday 02 April 2026 00:55:39 +0000 (0:00:05.581) 0:00:18.436 ******** 2026-04-02 00:58:06.213323 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.213327 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:58:06.213330 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:58:06.213334 | orchestrator | 2026-04-02 00:58:06.213338 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-04-02 00:58:06.213342 | orchestrator | Thursday 02 April 2026 00:55:41 +0000 (0:00:01.417) 0:00:19.853 ******** 2026-04-02 00:58:06.213345 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213349 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213353 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213357 | orchestrator | 2026-04-02 00:58:06.213361 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-02 00:58:06.213365 | orchestrator | Thursday 02 April 2026 00:55:42 +0000 (0:00:01.063) 0:00:20.917 ******** 2026-04-02 00:58:06.213369 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213372 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213376 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213380 | orchestrator | 2026-04-02 00:58:06.213384 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-02 00:58:06.213387 | orchestrator | Thursday 02 April 2026 00:55:42 +0000 (0:00:00.276) 0:00:21.194 ******** 2026-04-02 00:58:06.213391 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213395 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213399 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213402 | orchestrator | 2026-04-02 00:58:06.213406 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-02 00:58:06.213410 | orchestrator | Thursday 02 April 2026 00:55:42 +0000 (0:00:00.265) 0:00:21.459 ******** 2026-04-02 00:58:06.213414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.213426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.213433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.213437 | 
orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.213446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.213450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.213454 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-02 00:58:06.213472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-02 00:58:06.213476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-02 00:58:06.213480 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213484 | orchestrator | 2026-04-02 00:58:06.213487 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-02 00:58:06.213491 | orchestrator | Thursday 02 April 2026 00:55:43 +0000 (0:00:00.687) 0:00:22.147 ******** 2026-04-02 00:58:06.213495 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213499 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213503 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213506 | orchestrator | 2026-04-02 00:58:06.213510 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-02 00:58:06.213514 | orchestrator | Thursday 02 April 2026 00:55:43 +0000 (0:00:00.464) 0:00:22.612 ******** 2026-04-02 00:58:06.213518 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-02 00:58:06.213522 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-02 00:58:06.213526 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-02 00:58:06.213530 | orchestrator | 2026-04-02 00:58:06.213533 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-02 00:58:06.213537 | orchestrator | Thursday 02 April 2026 00:55:45 +0000 (0:00:01.791) 0:00:24.404 ******** 2026-04-02 00:58:06.213541 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:58:06.213545 | orchestrator | 2026-04-02 00:58:06.213549 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-02 00:58:06.213552 | orchestrator | Thursday 02 April 2026 00:55:46 +0000 (0:00:00.965) 0:00:25.369 ******** 2026-04-02 00:58:06.213556 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213560 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213564 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213567 | orchestrator | 2026-04-02 00:58:06.213571 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-02 00:58:06.213575 | orchestrator | Thursday 02 April 2026 00:55:47 +0000 (0:00:00.522) 0:00:25.891 ******** 2026-04-02 00:58:06.213583 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-02 00:58:06.213587 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 00:58:06.213591 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-02 00:58:06.213595 | orchestrator | 2026-04-02 00:58:06.213599 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-02 00:58:06.213602 | orchestrator | Thursday 02 April 2026 00:55:48 +0000 (0:00:01.112) 0:00:27.004 ******** 2026-04-02 00:58:06.213606 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.213610 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:58:06.213614 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:06.213618 | orchestrator | 2026-04-02 
00:58:06.213622 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-02 00:58:06.213625 | orchestrator | Thursday 02 April 2026 00:55:48 +0000 (0:00:00.454) 0:00:27.459 ******** 2026-04-02 00:58:06.213629 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-02 00:58:06.213633 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-02 00:58:06.213637 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-02 00:58:06.213640 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-02 00:58:06.213644 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-02 00:58:06.213654 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-02 00:58:06.213658 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-02 00:58:06.213662 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-02 00:58:06.213666 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-02 00:58:06.213669 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-02 00:58:06.213673 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-02 00:58:06.213677 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-02 00:58:06.213681 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 
2026-04-02 00:58:06.213685 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-02 00:58:06.213688 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-02 00:58:06.213692 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-02 00:58:06.213696 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-02 00:58:06.213700 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-02 00:58:06.213704 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-02 00:58:06.213708 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-02 00:58:06.213711 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-02 00:58:06.213715 | orchestrator | 2026-04-02 00:58:06.213719 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-02 00:58:06.213723 | orchestrator | Thursday 02 April 2026 00:55:58 +0000 (0:00:09.179) 0:00:36.639 ******** 2026-04-02 00:58:06.213726 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-02 00:58:06.213730 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-02 00:58:06.213737 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-02 00:58:06.213741 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-02 00:58:06.213745 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-02 00:58:06.213749 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-02 00:58:06.213753 | orchestrator | 2026-04-02 00:58:06.213756 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-02 00:58:06.213760 | orchestrator | Thursday 02 April 2026 00:56:00 +0000 (0:00:02.391) 0:00:39.030 ******** 2026-04-02 00:58:06.213764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213779 | orchestrator | 2026-04-02 00:58:06 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:58:06.213783 | orchestrator | 2026-04-02 00:58:06 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:58:06.213787 | orchestrator | 2026-04-02 00:58:06 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:06.213792 | orchestrator | 2026-04-02 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:06.213798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port':
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-02 00:58:06.213806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213825 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-02 00:58:06.213841 | orchestrator | 2026-04-02 00:58:06.213844 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-04-02 00:58:06.213848 | orchestrator | Thursday 02 April 2026 00:56:02 +0000 (0:00:02.286) 0:00:41.317 ******** 2026-04-02 00:58:06.213852 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213856 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213860 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213863 | orchestrator | 2026-04-02 00:58:06.213867 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-02 00:58:06.213871 | orchestrator | Thursday 02 April 2026 00:56:03 +0000 (0:00:00.463) 0:00:41.780 ******** 2026-04-02 00:58:06.213875 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.213878 | orchestrator | 2026-04-02 00:58:06.213882 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-02 00:58:06.213886 | orchestrator | Thursday 02 April 2026 00:56:05 +0000 (0:00:02.588) 0:00:44.369 ******** 2026-04-02 00:58:06.213890 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.213893 | orchestrator | 2026-04-02 00:58:06.213897 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-02 00:58:06.213901 | orchestrator | Thursday 02 April 2026 00:56:08 +0000 (0:00:02.627) 0:00:46.997 ******** 2026-04-02 00:58:06.213905 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:58:06.213908 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.213912 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:06.213916 | orchestrator | 2026-04-02 00:58:06.213920 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-02 00:58:06.213924 | orchestrator | Thursday 02 April 2026 00:56:09 +0000 (0:00:00.920) 0:00:47.917 ******** 2026-04-02 00:58:06.213927 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.213931 | orchestrator | ok: 
[testbed-node-1] 2026-04-02 00:58:06.213935 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:06.213938 | orchestrator | 2026-04-02 00:58:06.213942 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-02 00:58:06.213949 | orchestrator | Thursday 02 April 2026 00:56:09 +0000 (0:00:00.384) 0:00:48.302 ******** 2026-04-02 00:58:06.213955 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.213961 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.213970 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.213977 | orchestrator | 2026-04-02 00:58:06.214076 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-02 00:58:06.214086 | orchestrator | Thursday 02 April 2026 00:56:10 +0000 (0:00:00.642) 0:00:48.945 ******** 2026-04-02 00:58:06.214092 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.214097 | orchestrator | 2026-04-02 00:58:06.214103 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-02 00:58:06.214108 | orchestrator | Thursday 02 April 2026 00:56:26 +0000 (0:00:16.379) 0:01:05.325 ******** 2026-04-02 00:58:06.214114 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.214119 | orchestrator | 2026-04-02 00:58:06.214124 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-02 00:58:06.214130 | orchestrator | Thursday 02 April 2026 00:56:38 +0000 (0:00:11.909) 0:01:17.234 ******** 2026-04-02 00:58:06.214135 | orchestrator | 2026-04-02 00:58:06.214166 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-02 00:58:06.214172 | orchestrator | Thursday 02 April 2026 00:56:38 +0000 (0:00:00.062) 0:01:17.297 ******** 2026-04-02 00:58:06.214177 | orchestrator | 2026-04-02 00:58:06.214183 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2026-04-02 00:58:06.214190 | orchestrator | Thursday 02 April 2026 00:56:38 +0000 (0:00:00.062) 0:01:17.360 ******** 2026-04-02 00:58:06.214196 | orchestrator | 2026-04-02 00:58:06.214202 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-02 00:58:06.214208 | orchestrator | Thursday 02 April 2026 00:56:38 +0000 (0:00:00.071) 0:01:17.432 ******** 2026-04-02 00:58:06.214222 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.214228 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:58:06.214234 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:58:06.214240 | orchestrator | 2026-04-02 00:58:06.214257 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-02 00:58:06.214263 | orchestrator | Thursday 02 April 2026 00:56:48 +0000 (0:00:10.096) 0:01:27.528 ******** 2026-04-02 00:58:06.214269 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.214275 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:58:06.214281 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:58:06.214287 | orchestrator | 2026-04-02 00:58:06.214293 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-02 00:58:06.214299 | orchestrator | Thursday 02 April 2026 00:56:53 +0000 (0:00:04.988) 0:01:32.516 ******** 2026-04-02 00:58:06.214305 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.214312 | orchestrator | changed: [testbed-node-2] 2026-04-02 00:58:06.214317 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:58:06.214321 | orchestrator | 2026-04-02 00:58:06.214324 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-02 00:58:06.214328 | orchestrator | Thursday 02 April 2026 00:57:05 +0000 (0:00:11.387) 0:01:43.904 ******** 2026-04-02 00:58:06.214332 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 00:58:06.214336 | orchestrator | 2026-04-02 00:58:06.214339 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-02 00:58:06.214343 | orchestrator | Thursday 02 April 2026 00:57:05 +0000 (0:00:00.523) 0:01:44.428 ******** 2026-04-02 00:58:06.214347 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.214351 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:58:06.214354 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:06.214358 | orchestrator | 2026-04-02 00:58:06.214362 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-02 00:58:06.214365 | orchestrator | Thursday 02 April 2026 00:57:06 +0000 (0:00:00.740) 0:01:45.169 ******** 2026-04-02 00:58:06.214369 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:06.214373 | orchestrator | 2026-04-02 00:58:06.214376 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-02 00:58:06.214380 | orchestrator | Thursday 02 April 2026 00:57:08 +0000 (0:00:01.706) 0:01:46.875 ******** 2026-04-02 00:58:06.214384 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-02 00:58:06.214388 | orchestrator | 2026-04-02 00:58:06.214391 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-02 00:58:06.214395 | orchestrator | Thursday 02 April 2026 00:57:21 +0000 (0:00:13.002) 0:01:59.878 ******** 2026-04-02 00:58:06.214399 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-02 00:58:06.214402 | orchestrator | 2026-04-02 00:58:06.214406 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-02 00:58:06.214410 | orchestrator | Thursday 02 April 2026 00:57:51 +0000 (0:00:29.871) 0:02:29.750 ******** 2026-04-02 
00:58:06.214414 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-02 00:58:06.214418 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-02 00:58:06.214422 | orchestrator | 2026-04-02 00:58:06.214426 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-02 00:58:06.214430 | orchestrator | Thursday 02 April 2026 00:57:58 +0000 (0:00:07.610) 0:02:37.360 ******** 2026-04-02 00:58:06.214433 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.214437 | orchestrator | 2026-04-02 00:58:06.214441 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-02 00:58:06.214445 | orchestrator | Thursday 02 April 2026 00:57:58 +0000 (0:00:00.234) 0:02:37.595 ******** 2026-04-02 00:58:06.214448 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.214457 | orchestrator | 2026-04-02 00:58:06.214461 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-02 00:58:06.214464 | orchestrator | Thursday 02 April 2026 00:57:59 +0000 (0:00:00.233) 0:02:37.829 ******** 2026-04-02 00:58:06.214468 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.214472 | orchestrator | 2026-04-02 00:58:06.214476 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-02 00:58:06.214479 | orchestrator | Thursday 02 April 2026 00:57:59 +0000 (0:00:00.206) 0:02:38.036 ******** 2026-04-02 00:58:06.214483 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.214487 | orchestrator | 2026-04-02 00:58:06.214491 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-02 00:58:06.214494 | orchestrator | Thursday 02 April 2026 00:58:00 +0000 (0:00:00.862) 0:02:38.899 ******** 2026-04-02 00:58:06.214498 
| orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:06.214502 | orchestrator | 2026-04-02 00:58:06.214506 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-02 00:58:06.214510 | orchestrator | Thursday 02 April 2026 00:58:04 +0000 (0:00:03.858) 0:02:42.757 ******** 2026-04-02 00:58:06.214513 | orchestrator | skipping: [testbed-node-0] 2026-04-02 00:58:06.214517 | orchestrator | skipping: [testbed-node-1] 2026-04-02 00:58:06.214521 | orchestrator | skipping: [testbed-node-2] 2026-04-02 00:58:06.214525 | orchestrator | 2026-04-02 00:58:06.214528 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:58:06.214533 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-02 00:58:06.214538 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-02 00:58:06.214542 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-02 00:58:06.214545 | orchestrator | 2026-04-02 00:58:06.214549 | orchestrator | 2026-04-02 00:58:06.214553 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:58:06.214563 | orchestrator | Thursday 02 April 2026 00:58:05 +0000 (0:00:01.009) 0:02:43.766 ******** 2026-04-02 00:58:06.214567 | orchestrator | =============================================================================== 2026-04-02 00:58:06.214570 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.87s 2026-04-02 00:58:06.214574 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.38s 2026-04-02 00:58:06.214578 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.00s 2026-04-02 00:58:06.214582 | orchestrator | keystone : Running 
Keystone fernet bootstrap container ----------------- 11.91s 2026-04-02 00:58:06.214586 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.39s 2026-04-02 00:58:06.214589 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.10s 2026-04-02 00:58:06.214593 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.18s 2026-04-02 00:58:06.214597 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.61s 2026-04-02 00:58:06.214601 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.58s 2026-04-02 00:58:06.214604 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.99s 2026-04-02 00:58:06.214608 | orchestrator | keystone : Creating default user role ----------------------------------- 3.86s 2026-04-02 00:58:06.214612 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.34s 2026-04-02 00:58:06.214616 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.22s 2026-04-02 00:58:06.214620 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.63s 2026-04-02 00:58:06.214623 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.59s 2026-04-02 00:58:06.214633 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.39s 2026-04-02 00:58:06.214637 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s 2026-04-02 00:58:06.214641 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.91s 2026-04-02 00:58:06.214645 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.79s 2026-04-02 00:58:06.214649 | orchestrator | keystone : Run key distribution 
----------------------------------------- 1.71s 2026-04-02 00:58:09.202283 | orchestrator | 2026-04-02 00:58:09 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:09.202420 | orchestrator | 2026-04-02 00:58:09 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:58:09.202439 | orchestrator | 2026-04-02 00:58:09 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 00:58:09.203419 | orchestrator | 2026-04-02 00:58:09 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:58:09.203907 | orchestrator | 2026-04-02 00:58:09 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:09.203958 | orchestrator | 2026-04-02 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:12.249204 | orchestrator | 2026-04-02 00:58:12 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:12.249386 | orchestrator | 2026-04-02 00:58:12 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:58:12.251123 | orchestrator | 2026-04-02 00:58:12 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 00:58:12.251961 | orchestrator | 2026-04-02 00:58:12 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:58:12.254967 | orchestrator | 2026-04-02 00:58:12 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:12.255353 | orchestrator | 2026-04-02 00:58:12 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:15.307376 | orchestrator | 2026-04-02 00:58:15 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:15.307480 | orchestrator | 2026-04-02 00:58:15 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state STARTED 2026-04-02 00:58:15.308920 | orchestrator | 2026-04-02 00:58:15 | INFO  | Task 
8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 00:58:15.309598 | orchestrator | 2026-04-02 00:58:15 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:58:15.311665 | orchestrator | 2026-04-02 00:58:15 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:15.311717 | orchestrator | 2026-04-02 00:58:15 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:18.353697 | orchestrator | 2026-04-02 00:58:18 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:18.354346 | orchestrator | 2026-04-02 00:58:18 | INFO  | Task 99836ddc-ee08-453d-9092-e99a1516f08c is in state SUCCESS 2026-04-02 00:58:18.356406 | orchestrator | 2026-04-02 00:58:18 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 00:58:18.358104 | orchestrator | 2026-04-02 00:58:18 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:58:18.359170 | orchestrator | 2026-04-02 00:58:18 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:18.359548 | orchestrator | 2026-04-02 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:21.423097 | orchestrator | 2026-04-02 00:58:21 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 00:58:21.423192 | orchestrator | 2026-04-02 00:58:21 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:21.424892 | orchestrator | 2026-04-02 00:58:21 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 00:58:21.427601 | orchestrator | 2026-04-02 00:58:21 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state STARTED 2026-04-02 00:58:21.428604 | orchestrator | 2026-04-02 00:58:21 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:21.429731 | orchestrator | 2026-04-02 00:58:21 | INFO  | Wait 1 
second(s) until the next check 2026-04-02 00:58:24.459865 | orchestrator | 2026-04-02 00:58:24 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 00:58:24.460211 | orchestrator | 2026-04-02 00:58:24 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:24.471215 | orchestrator | 2026-04-02 00:58:24 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 00:58:24.473393 | orchestrator | 2026-04-02 00:58:24.473433 | orchestrator | 2026-04-02 00:58:24.473439 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 00:58:24.473443 | orchestrator | 2026-04-02 00:58:24.473447 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 00:58:24.473451 | orchestrator | Thursday 02 April 2026 00:57:36 +0000 (0:00:00.386) 0:00:00.386 ******** 2026-04-02 00:58:24.473455 | orchestrator | ok: [testbed-node-0] 2026-04-02 00:58:24.473460 | orchestrator | ok: [testbed-node-1] 2026-04-02 00:58:24.473463 | orchestrator | ok: [testbed-node-2] 2026-04-02 00:58:24.473467 | orchestrator | ok: [testbed-node-3] 2026-04-02 00:58:24.473471 | orchestrator | ok: [testbed-node-4] 2026-04-02 00:58:24.473475 | orchestrator | ok: [testbed-node-5] 2026-04-02 00:58:24.473479 | orchestrator | ok: [testbed-manager] 2026-04-02 00:58:24.473482 | orchestrator | 2026-04-02 00:58:24.473486 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 00:58:24.473490 | orchestrator | Thursday 02 April 2026 00:57:37 +0000 (0:00:00.818) 0:00:01.205 ******** 2026-04-02 00:58:24.473494 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-02 00:58:24.473498 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-02 00:58:24.473502 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-02 00:58:24.473505 | 
orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-02 00:58:24.473509 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-02 00:58:24.473513 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-02 00:58:24.473517 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-02 00:58:24.473520 | orchestrator | 2026-04-02 00:58:24.473524 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-02 00:58:24.473528 | orchestrator | 2026-04-02 00:58:24.473532 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-02 00:58:24.473536 | orchestrator | Thursday 02 April 2026 00:57:38 +0000 (0:00:00.942) 0:00:02.148 ******** 2026-04-02 00:58:24.473540 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-02 00:58:24.473545 | orchestrator | 2026-04-02 00:58:24.473549 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-02 00:58:24.473552 | orchestrator | Thursday 02 April 2026 00:57:40 +0000 (0:00:02.230) 0:00:04.379 ******** 2026-04-02 00:58:24.473556 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-02 00:58:24.473565 | orchestrator | 2026-04-02 00:58:24.473569 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-02 00:58:24.473590 | orchestrator | Thursday 02 April 2026 00:57:47 +0000 (0:00:06.771) 0:00:11.150 ******** 2026-04-02 00:58:24.473595 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-02 00:58:24.473599 | orchestrator | changed: [testbed-node-0] => (item=swift -> 
https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-02 00:58:24.473603 | orchestrator | 2026-04-02 00:58:24.473607 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-02 00:58:24.473611 | orchestrator | Thursday 02 April 2026 00:57:55 +0000 (0:00:08.215) 0:00:19.365 ******** 2026-04-02 00:58:24.473614 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-02 00:58:24.473618 | orchestrator | 2026-04-02 00:58:24.473622 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-02 00:58:24.473626 | orchestrator | Thursday 02 April 2026 00:57:59 +0000 (0:00:04.066) 0:00:23.432 ******** 2026-04-02 00:58:24.473630 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-02 00:58:24.473640 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-02 00:58:24.473644 | orchestrator | 2026-04-02 00:58:24.473648 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-02 00:58:24.473652 | orchestrator | Thursday 02 April 2026 00:58:05 +0000 (0:00:05.230) 0:00:28.662 ******** 2026-04-02 00:58:24.473655 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-02 00:58:24.473659 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-02 00:58:24.473663 | orchestrator | 2026-04-02 00:58:24.473667 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-02 00:58:24.473670 | orchestrator | Thursday 02 April 2026 00:58:12 +0000 (0:00:07.126) 0:00:35.789 ******** 2026-04-02 00:58:24.473674 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-02 00:58:24.473678 | orchestrator | 2026-04-02 00:58:24.473681 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:58:24.473685 | 
orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.473690 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.473704 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.473708 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.473712 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.473725 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.473729 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.473733 | orchestrator | 2026-04-02 00:58:24.473737 | orchestrator | 2026-04-02 00:58:24.473740 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:58:24.473744 | orchestrator | Thursday 02 April 2026 00:58:17 +0000 (0:00:05.441) 0:00:41.231 ******** 2026-04-02 00:58:24.473748 | orchestrator | =============================================================================== 2026-04-02 00:58:24.473752 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.22s 2026-04-02 00:58:24.473756 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.13s 2026-04-02 00:58:24.473759 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 6.77s 2026-04-02 00:58:24.473763 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.44s 2026-04-02 00:58:24.473771 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 5.23s 2026-04-02 00:58:24.473775 | orchestrator 
| service-ks-register : ceph-rgw | Creating projects ---------------------- 4.07s 2026-04-02 00:58:24.473778 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.23s 2026-04-02 00:58:24.473783 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-04-02 00:58:24.473786 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s 2026-04-02 00:58:24.473790 | orchestrator | 2026-04-02 00:58:24.473794 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-02 00:58:24.473798 | orchestrator | 2.16.14 2026-04-02 00:58:24.473802 | orchestrator | 2026-04-02 00:58:24.473806 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-02 00:58:24.473811 | orchestrator | 2026-04-02 00:58:24.473817 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-02 00:58:24.473827 | orchestrator | Thursday 02 April 2026 00:57:31 +0000 (0:00:00.212) 0:00:00.212 ******** 2026-04-02 00:58:24.473834 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.473840 | orchestrator | 2026-04-02 00:58:24.473846 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-02 00:58:24.473853 | orchestrator | Thursday 02 April 2026 00:57:33 +0000 (0:00:02.226) 0:00:02.439 ******** 2026-04-02 00:58:24.473860 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.473867 | orchestrator | 2026-04-02 00:58:24.473874 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-02 00:58:24.473881 | orchestrator | Thursday 02 April 2026 00:57:34 +0000 (0:00:01.094) 0:00:03.533 ******** 2026-04-02 00:58:24.473887 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.473891 | orchestrator | 2026-04-02 00:58:24.473895 | orchestrator | TASK 
[Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-02 00:58:24.473899 | orchestrator | Thursday 02 April 2026 00:57:36 +0000 (0:00:01.842) 0:00:05.376 ******** 2026-04-02 00:58:24.473902 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.473906 | orchestrator | 2026-04-02 00:58:24.473916 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-02 00:58:24.473920 | orchestrator | Thursday 02 April 2026 00:57:37 +0000 (0:00:01.212) 0:00:06.588 ******** 2026-04-02 00:58:24.473928 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.473932 | orchestrator | 2026-04-02 00:58:24.473936 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-02 00:58:24.473940 | orchestrator | Thursday 02 April 2026 00:57:38 +0000 (0:00:01.033) 0:00:07.622 ******** 2026-04-02 00:58:24.473943 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.473947 | orchestrator | 2026-04-02 00:58:24.473951 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-02 00:58:24.473958 | orchestrator | Thursday 02 April 2026 00:57:39 +0000 (0:00:01.286) 0:00:08.908 ******** 2026-04-02 00:58:24.473962 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.473965 | orchestrator | 2026-04-02 00:58:24.473969 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-02 00:58:24.473998 | orchestrator | Thursday 02 April 2026 00:57:41 +0000 (0:00:02.074) 0:00:10.983 ******** 2026-04-02 00:58:24.474004 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.474008 | orchestrator | 2026-04-02 00:58:24.474039 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-02 00:58:24.474044 | orchestrator | Thursday 02 April 2026 00:57:43 +0000 (0:00:01.188) 0:00:12.171 ******** 2026-04-02 
00:58:24.474048 | orchestrator | changed: [testbed-manager] 2026-04-02 00:58:24.474053 | orchestrator | 2026-04-02 00:58:24.474057 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-02 00:58:24.474062 | orchestrator | Thursday 02 April 2026 00:57:56 +0000 (0:00:12.848) 0:00:25.020 ******** 2026-04-02 00:58:24.474066 | orchestrator | skipping: [testbed-manager] 2026-04-02 00:58:24.474074 | orchestrator | 2026-04-02 00:58:24.474079 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-02 00:58:24.474083 | orchestrator | 2026-04-02 00:58:24.474088 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-02 00:58:24.474092 | orchestrator | Thursday 02 April 2026 00:57:56 +0000 (0:00:00.148) 0:00:25.169 ******** 2026-04-02 00:58:24.474145 | orchestrator | changed: [testbed-node-0] 2026-04-02 00:58:24.474150 | orchestrator | 2026-04-02 00:58:24.474188 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-02 00:58:24.474192 | orchestrator | 2026-04-02 00:58:24.474197 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-02 00:58:24.474201 | orchestrator | Thursday 02 April 2026 00:57:58 +0000 (0:00:01.968) 0:00:27.137 ******** 2026-04-02 00:58:24.474204 | orchestrator | changed: [testbed-node-1] 2026-04-02 00:58:24.474208 | orchestrator | 2026-04-02 00:58:24.474212 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-02 00:58:24.474216 | orchestrator | 2026-04-02 00:58:24.474220 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-02 00:58:24.474228 | orchestrator | Thursday 02 April 2026 00:58:09 +0000 (0:00:11.640) 0:00:38.778 ******** 2026-04-02 00:58:24.474234 | orchestrator | changed: [testbed-node-2] 
2026-04-02 00:58:24.474241 | orchestrator | 2026-04-02 00:58:24.474245 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 00:58:24.474249 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-02 00:58:24.474253 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.474257 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.474261 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 00:58:24.474264 | orchestrator | 2026-04-02 00:58:24.474268 | orchestrator | 2026-04-02 00:58:24.474272 | orchestrator | 2026-04-02 00:58:24.474275 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 00:58:24.474279 | orchestrator | Thursday 02 April 2026 00:58:21 +0000 (0:00:11.465) 0:00:50.243 ******** 2026-04-02 00:58:24.474283 | orchestrator | =============================================================================== 2026-04-02 00:58:24.474287 | orchestrator | Restart ceph manager service ------------------------------------------- 25.07s 2026-04-02 00:58:24.474291 | orchestrator | Create admin user ------------------------------------------------------ 12.85s 2026-04-02 00:58:24.474294 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.23s 2026-04-02 00:58:24.474298 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.07s 2026-04-02 00:58:24.474302 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.84s 2026-04-02 00:58:24.474305 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.29s 2026-04-02 00:58:24.474309 | orchestrator | Set 
mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.21s 2026-04-02 00:58:24.474313 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2026-04-02 00:58:24.474317 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.09s 2026-04-02 00:58:24.474320 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.03s 2026-04-02 00:58:24.474324 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2026-04-02 00:58:24.474328 | orchestrator | 2026-04-02 00:58:24 | INFO  | Task 692e7569-a049-4eac-87db-6bc1b90b3008 is in state SUCCESS 2026-04-02 00:58:24.474332 | orchestrator | 2026-04-02 00:58:24 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:24.474339 | orchestrator | 2026-04-02 00:58:24 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:27.509408 | orchestrator | 2026-04-02 00:58:27 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 00:58:27.509886 | orchestrator | 2026-04-02 00:58:27 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:27.511646 | orchestrator | 2026-04-02 00:58:27 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 00:58:27.512450 | orchestrator | 2026-04-02 00:58:27 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:27.512473 | orchestrator | 2026-04-02 00:58:27 | INFO  | Wait 1 second(s) until the next check 2026-04-02 00:58:30.539402 | orchestrator | 2026-04-02 00:58:30 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 00:58:30.540119 | orchestrator | 2026-04-02 00:58:30 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 00:58:30.540803 | orchestrator | 2026-04-02 00:58:30 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 
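The dashboard tasks recapped above (set `server_addr`, `server_port`, `ssl`, `standby_behaviour`, `standby_error_status_code`, then enable the module and create an admin user) map onto plain `ceph` CLI calls. A minimal sketch, assuming standard Ceph CLI syntax — the playbook's actual module invocations are not shown in the log, and the helper below is hypothetical:

```python
# Sketch: render the `ceph config set mgr ...` commands corresponding to
# the dashboard settings applied by the play above. The command syntax is
# assumed from the standard Ceph CLI, not taken from the playbook itself.

def dashboard_config_commands(settings):
    """Build one `ceph config set` command per mgr/dashboard option."""
    return [
        f"ceph config set mgr mgr/dashboard/{key} {value}"
        for key, value in settings.items()
    ]

# Values as seen in the task names in the log.
settings = {
    "server_addr": "0.0.0.0",
    "server_port": "7000",
    "ssl": "false",
    "standby_behaviour": "error",
    "standby_error_status_code": "404",
}

commands = dashboard_config_commands(settings)
```

Enabling the module and creating the admin user are separate steps, roughly `ceph mgr module enable dashboard` followed by `ceph dashboard ac-user-create <user> -i <password-file> administrator`, which matches the log's pattern of writing the dashboard password to a temporary file first.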
is in state STARTED 2026-04-02 00:58:30.541754 | orchestrator | 2026-04-02 00:58:30 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 00:58:30.541786 | orchestrator | 2026-04-02 00:58:30 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:00:20.070607 | orchestrator | 2026-04-02 01:00:20 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:00:20.071184 | orchestrator | 2026-04-02 01:00:20 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 01:00:20.075753 | orchestrator | 2026-04-02 01:00:20 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 01:00:20.077941 | orchestrator | 2026-04-02 01:00:20 | INFO  | Task
07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 01:00:20.079586 | orchestrator | 2026-04-02 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:00:23.128307 | orchestrator | 2026-04-02 01:00:23 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:00:23.128437 | orchestrator | 2026-04-02 01:00:23 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 01:00:23.129288 | orchestrator | 2026-04-02 01:00:23 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 01:00:23.129837 | orchestrator | 2026-04-02 01:00:23 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 01:00:23.129978 | orchestrator | 2026-04-02 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:00:26.152917 | orchestrator | 2026-04-02 01:00:26 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:00:26.153011 | orchestrator | 2026-04-02 01:00:26 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state STARTED 2026-04-02 01:00:26.153485 | orchestrator | 2026-04-02 01:00:26 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 01:00:26.154215 | orchestrator | 2026-04-02 01:00:26 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED 2026-04-02 01:00:26.154246 | orchestrator | 2026-04-02 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:00:29.182441 | orchestrator | 2026-04-02 01:00:29 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:00:29.188615 | orchestrator | 2026-04-02 01:00:29 | INFO  | Task cb6e6411-9343-4d6b-9f4f-5d2f3c897e8d is in state SUCCESS 2026-04-02 01:00:29.189499 | orchestrator | 2026-04-02 01:00:29.189537 | orchestrator | 2026-04-02 01:00:29.189546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:00:29.189554 | 
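The wait loop above polls each submitted task until it leaves STARTED (here, task cb6e6411… reaches SUCCESS at 01:00:29 and its buffered play output is then printed). A minimal sketch of that polling pattern, with `wait_for_tasks` and `get_state` as hypothetical names — the real osism client's internals are not visible in the log:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=None,
                   sleep=time.sleep):
    """Poll task states until none is STARTED anymore.

    get_state: injected callable task_id -> state string (assumption:
    states like STARTED/SUCCESS, as in the log). sleep is injectable
    so the loop can be exercised without real delays.
    """
    pending = set(task_ids)
    waited = 0.0
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if not pending:
            break
        if timeout is not None and waited >= timeout:
            raise TimeoutError("tasks still pending after timeout")
        print(f"Wait {interval:.0f} second(s) until the next check")
        sleep(interval)
        waited += interval
```

The injectable `sleep` and `get_state` keep the loop testable; the production loop presumably queries a task backend instead.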
orchestrator | 2026-04-02 01:00:29.189562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:00:29.189570 | orchestrator | Thursday 02 April 2026 00:57:30 +0000 (0:00:00.320) 0:00:00.320 ******** 2026-04-02 01:00:29.189577 | orchestrator | ok: [testbed-manager] 2026-04-02 01:00:29.189586 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:00:29.189594 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:00:29.189601 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:00:29.189608 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:00:29.189615 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:00:29.189623 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:00:29.189630 | orchestrator | 2026-04-02 01:00:29.189637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 01:00:29.189644 | orchestrator | Thursday 02 April 2026 00:57:31 +0000 (0:00:00.730) 0:00:01.051 ******** 2026-04-02 01:00:29.189659 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-02 01:00:29.189667 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-02 01:00:29.189675 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-02 01:00:29.189682 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-02 01:00:29.189689 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-02 01:00:29.189696 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-02 01:00:29.189703 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-02 01:00:29.189710 | orchestrator | 2026-04-02 01:00:29.189717 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-02 01:00:29.189725 | orchestrator | 2026-04-02 01:00:29.189732 | orchestrator | TASK [prometheus : include_tasks] 
********************************************** 2026-04-02 01:00:29.189739 | orchestrator | Thursday 02 April 2026 00:57:31 +0000 (0:00:00.749) 0:00:01.800 ******** 2026-04-02 01:00:29.189747 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 01:00:29.189755 | orchestrator | 2026-04-02 01:00:29.189763 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-02 01:00:29.189770 | orchestrator | Thursday 02 April 2026 00:57:32 +0000 (0:00:01.135) 0:00:02.935 ******** 2026-04-02 01:00:29.190223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.190249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.190346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.190357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.190377 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-02 01:00:29.190386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.190394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.190409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.190417 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.190535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.190544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.190757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.190785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 
01:00:29.190796 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.190804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.190817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.190931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.190950 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.190958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191067 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-02 01:00:29.191229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191275 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191290 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191334 | orchestrator | 2026-04-02 01:00:29.191342 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-02 01:00:29.191350 | orchestrator | Thursday 02 April 2026 00:57:36 +0000 (0:00:04.063) 0:00:06.999 ******** 2026-04-02 01:00:29.191358 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 
01:00:29.191366 | orchestrator | 2026-04-02 01:00:29.191373 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-02 01:00:29.191380 | orchestrator | Thursday 02 April 2026 00:57:38 +0000 (0:00:01.529) 0:00:08.528 ******** 2026-04-02 01:00:29.191398 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-02 01:00:29.191407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.191414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.191422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.191455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.191464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.191472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.191480 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.191496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191661 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191708 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-02 01:00:29.191729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.191756 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.191764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 
01:00:29.191989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.192002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.192162 | orchestrator | 2026-04-02 01:00:29.192173 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-02 01:00:29.192181 | orchestrator | Thursday 02 April 2026 00:57:45 +0000 (0:00:06.488) 0:00:15.016 ******** 2026-04-02 01:00:29.192190 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-02 01:00:29.192204 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192213 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192223 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-02 01:00:29.192291 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192323 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192383 | orchestrator | skipping: [testbed-manager] 2026-04-02 01:00:29.192391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192455 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:00:29.192462 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:00:29.192474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192511 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:00:29.192542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192550 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192566 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:00:29.192577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192612 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:00:29.192620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192668 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:00:29.192676 | orchestrator | 2026-04-02 01:00:29.192683 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-02 01:00:29.192691 | orchestrator | Thursday 02 April 2026 00:57:46 +0000 (0:00:01.473) 0:00:16.490 ******** 2026-04-02 01:00:29.192698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-02 01:00:29.192709 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192717 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-02 01:00:29.192738 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192798 | orchestrator | skipping: [testbed-manager] 2026-04-02 01:00:29.192805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-02 01:00:29.192896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-02 01:00:29.192941 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:00:29.192949 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:00:29.192956 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:00:29.192981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.192989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.192996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.193003 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:00:29.193011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.193022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.193029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.193045 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:00:29.193053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-02 01:00:29.193077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.193105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-02 01:00:29.193114 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:00:29.193121 | orchestrator | 2026-04-02 01:00:29.193128 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-02 01:00:29.193136 | orchestrator | Thursday 02 April 2026 00:57:48 +0000 (0:00:01.905) 0:00:18.395 ******** 2026-04-02 01:00:29.193143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.193151 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-02 01:00:29.193162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.193170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.193183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.193191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.193215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.193223 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-02 01:00:29.193231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193275 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193297 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193352 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-02 01:00:29.193377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193393 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-02 01:00:29.193416 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-02 01:00:29.193446 | orchestrator | 2026-04-02 01:00:29.193453 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-02 01:00:29.193460 | orchestrator | Thursday 02 April 2026 00:57:54 +0000 (0:00:06.596) 0:00:24.992 ******** 2026-04-02 01:00:29.193468 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-02 01:00:29.193475 | orchestrator | 2026-04-02 01:00:29.193483 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-02 01:00:29.193506 | orchestrator | Thursday 02 April 2026 00:57:55 +0000 (0:00:00.939) 0:00:25.932 ******** 2026-04-02 01:00:29.193514 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095204, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2798872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193523 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095204, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2798872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193538 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095204, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2798872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193546 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095204, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2798872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193554 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1095236, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2856627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193563 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1095236, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2856627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193587 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095204, 'dev': 120, 
'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2798872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193595 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1095236, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2856627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193603 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1095192, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193621 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095204, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2798872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193629 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1095236, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2856627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193636 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095204, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2798872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.193644 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1095192, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-04-02 01:00:29.193669 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1095192, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193677 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1095236, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2856627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193684 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1095192, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193699 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095221, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2830906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193706 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1095236, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2856627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193714 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095221, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2830906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.193722 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095221, 
'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2830906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193746 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095183, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2772517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193755 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1095192, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193762 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095221, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2830906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193779 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1095192, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193787 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095221, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2830906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193794 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095183, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2772517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193802 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1095236, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2856627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193809 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095183, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2772517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193834 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095207, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2806425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193847 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095183, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2772517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193882 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095207, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2806425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193900 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095183, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2772517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193910 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095221, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2830906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193917 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095207, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2806425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193924 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1095217, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2828088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193950 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1095217, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2828088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193963 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095207, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2806425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193971 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095210, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2809918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1095217, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2828088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193990 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095207, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2806425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.193997 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095183, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2772517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194005 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1095217, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2828088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194092 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095210, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2809918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194111 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095203, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194119 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095210, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2809918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194130 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095203, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194138 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095210, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2809918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194145 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095233, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2849734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194153 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095233, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2849734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194184 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095203, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194192 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095172, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.27591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194200 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1095217, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2828088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194210 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095172, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.27591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194218 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095207, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2806425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194226 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095233, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2849734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095203, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194261 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095210, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2809918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194269 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095260, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2881982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194276 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095260, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2881982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194288 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1095192, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194295 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095172, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.27591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194302 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095233, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2849734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194310 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095203, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194340 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1095217, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2828088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194348 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095228, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.284703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194356 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095172, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.27591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194366 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095228, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.284703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194374 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095260, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2881982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194381 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095260, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2881982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194389 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095233, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2849734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194420 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095210, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2809918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194428 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095187, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.277644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194435 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095187, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.277644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194446 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095228, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.284703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194454 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095228, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.284703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194461 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1095178, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2766397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194472 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095172, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.27591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194485 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1095178, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2766397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194492 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095203, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194499 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095187, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.277644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194510 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095233, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2849734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194518 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095221, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2830906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194525 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095260, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2881982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194537 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095214, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.282195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194551 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095187, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.277644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194559 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095214, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.282195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194566 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1095178, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2766397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-02 01:00:29.194577 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095172, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.27591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True,
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194585 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095211, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.281723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194593 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095211, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.281723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194604 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095228, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.284703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 
01:00:29.194616 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1095178, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2766397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194623 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095256, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2877223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194631 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:00:29.194639 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095214, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.282195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194650 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095260, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2881982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194657 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095256, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2877223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194669 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:00:29.194677 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095187, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.277644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194684 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095211, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.281723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194695 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095214, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.282195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194702 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095228, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.284703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194709 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095183, 'dev': 120, 'nlink': 1, 
'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2772517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194720 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1095178, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2766397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194728 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095211, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.281723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194740 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095187, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.277644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194748 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095256, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2877223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194755 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:00:29.194766 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095214, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.282195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194774 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095256, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2877223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-04-02 01:00:29.194782 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:00:29.194789 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1095178, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2766397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194800 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095211, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.281723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194814 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095214, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.282195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194821 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095256, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2877223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194829 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:00:29.194836 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095207, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2806425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194847 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095211, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.281723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194855 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095256, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2877223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-02 01:00:29.194882 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:00:29.194889 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1095217, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2828088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194900 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095210, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2809918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194912 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095203, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2792835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194919 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095233, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2849734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194927 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095172, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.27591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194937 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095260, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 
1775088987.2881982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194943 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095228, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.284703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194949 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095187, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.277644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194960 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1095178, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2766397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194973 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095214, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.282195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194980 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095211, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.281723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194988 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095256, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2877223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-02 01:00:29.194995 | orchestrator | 2026-04-02 01:00:29.195003 | orchestrator | TASK 
[prometheus : Find prometheus common config overrides] ******************** 2026-04-02 01:00:29.195010 | orchestrator | Thursday 02 April 2026 00:58:21 +0000 (0:00:26.065) 0:00:51.997 ******** 2026-04-02 01:00:29.195017 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-02 01:00:29.195025 | orchestrator | 2026-04-02 01:00:29.195035 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-02 01:00:29.195042 | orchestrator | Thursday 02 April 2026 00:58:22 +0000 (0:00:00.848) 0:00:52.845 ******** 2026-04-02 01:00:29.195050 | orchestrator | [WARNING]: Skipped 2026-04-02 01:00:29.195058 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-02 01:00:29.195065 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-02 01:00:29.195072 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-02 01:00:29.195080 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-02 01:00:29.195087 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-02 01:00:29.195094 | orchestrator | [WARNING]: Skipped 2026-04-02 01:00:29.195101 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-02 01:00:29.195108 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-02 01:00:29.195115 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-02 01:00:29.195123 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-02 01:00:29.195130 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-02 01:00:29.195137 | orchestrator | [WARNING]: Skipped 2026-04-02 01:00:29.195144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-02 01:00:29.195155 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-02 01:00:29.195163 
| orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-04-02 01:00:29.195177 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-02 01:00:29.195184 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-04-02 01:00:29.195218 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-02 01:00:29.195227 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-04-02 01:00:29.195265 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-02 01:00:29.195270 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-04-02 01:00:29.195299 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-02 01:00:29.195304 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-04-02 01:00:29.195332 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-02 01:00:29.195337 | orchestrator |
2026-04-02 01:00:29.195343 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-02 01:00:29.195348 | orchestrator | Thursday 02 April 2026 00:58:24 +0000 (0:00:01.828) 0:00:54.673 ********
2026-04-02 01:00:29.195354 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-02 01:00:29.195359 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-02 01:00:29.195365 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.195371 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.195376 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-02 01:00:29.195381 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.195387 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-02 01:00:29.195393 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.195482 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-02 01:00:29.195490 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-02 01:00:29.195495 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.195501 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.195506 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-02 01:00:29.195519 | orchestrator |
2026-04-02 01:00:29.195524 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-02 01:00:29.195530 | orchestrator | Thursday 02 April 2026 00:58:40 +0000 (0:00:15.446) 0:01:10.120 ********
2026-04-02 01:00:29.195536 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-02 01:00:29.195548 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.195554 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-02 01:00:29.195559 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.195566 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-02 01:00:29.195571 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.195577 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-02 01:00:29.195582 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.195588 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-02 01:00:29.195593 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.195598 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-02 01:00:29.195604 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.195609 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-02 01:00:29.195615 | orchestrator |
2026-04-02 01:00:29.195621 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-02 01:00:29.195626 | orchestrator | Thursday 02 April 2026 00:58:43 +0000 (0:00:03.670) 0:01:13.790 ********
2026-04-02 01:00:29.195632 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-02 01:00:29.195639 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-02 01:00:29.195645 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.195650 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.195657 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-02 01:00:29.195662 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-02 01:00:29.195668 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.195679 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-02 01:00:29.195685 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.195691 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-02 01:00:29.195696 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.195702 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-02 01:00:29.195708 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.195714 | orchestrator |
2026-04-02 01:00:29.195719 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-02 01:00:29.195725 | orchestrator | Thursday 02 April 2026 00:58:45 +0000 (0:00:01.875) 0:01:15.666 ********
2026-04-02 01:00:29.195731 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:00:29.195737 | orchestrator |
2026-04-02 01:00:29.195742 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-02 01:00:29.195748 | orchestrator | Thursday 02 April 2026 00:58:46 +0000 (0:00:00.561) 0:01:16.228 ********
2026-04-02 01:00:29.195755 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:00:29.195766 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.195772 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.195778 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.195784 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.195789 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.195795 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.195801 | orchestrator |
2026-04-02 01:00:29.195806 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-02 01:00:29.195815 | orchestrator | Thursday 02 April 2026 00:58:46 +0000 (0:00:00.632) 0:01:16.860 ********
2026-04-02 01:00:29.195822 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:00:29.195829 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.195837 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.195843 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.195849 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:29.195854 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:29.195913 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:29.195919 | orchestrator |
2026-04-02 01:00:29.195924 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-02 01:00:29.195930 | orchestrator | Thursday 02 April 2026 00:58:49 +0000 (0:00:02.554) 0:01:19.414 ********
2026-04-02 01:00:29.195936 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-02 01:00:29.195942 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:00:29.195948 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-02 01:00:29.195954 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.195959 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-02 01:00:29.195964 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.195970 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-02 01:00:29.195975 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.195988 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-02 01:00:29.195994 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.196000 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-02 01:00:29.196005 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.196011 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-02 01:00:29.196017 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.196022 | orchestrator |
2026-04-02 01:00:29.196028 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-02 01:00:29.196033 | orchestrator | Thursday 02 April 2026 00:58:51 +0000 (0:00:02.275) 0:01:21.689 ********
2026-04-02 01:00:29.196039 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-02 01:00:29.196045 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.196050 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-02 01:00:29.196056 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-02 01:00:29.196062 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.196068 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.196073 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-02 01:00:29.196079 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.196085 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-02 01:00:29.196091 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-02 01:00:29.196103 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.196109 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-02 01:00:29.196115 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.196121 | orchestrator |
2026-04-02 01:00:29.196127 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-02 01:00:29.196134 | orchestrator | Thursday 02 April 2026 00:58:53 +0000 (0:00:02.286) 0:01:23.976 ********
2026-04-02 01:00:29.196145 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2026-04-02 01:00:29.196179 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:00:29.196186 | orchestrator |
2026-04-02 01:00:29.196192 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-02 01:00:29.196199 | orchestrator | Thursday 02 April 2026 00:58:55 +0000 (0:00:01.348) 0:01:25.324 ********
2026-04-02 01:00:29.196205 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:00:29.196212 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.196218 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.196225 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.196232 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.196239 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.196245 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.196252 | orchestrator |
2026-04-02 01:00:29.196259 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-02 01:00:29.196266 | orchestrator | Thursday 02 April 2026 00:58:56 +0000 (0:00:00.773) 0:01:26.098 ********
2026-04-02 01:00:29.196273 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:00:29.196386 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:29.196394 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:29.196400 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:29.196406 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:00:29.196412 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:00:29.196419 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:00:29.196425 | orchestrator |
2026-04-02 01:00:29.196432 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-04-02 01:00:29.196438 | orchestrator | Thursday 02 April 2026 00:58:57 +0000 (0:00:00.921) 0:01:27.020 ********
2026-04-02 01:00:29.196447 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-02 01:00:29.196463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 01:00:29.196479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 01:00:29.196486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 01:00:29.196499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 01:00:29.196506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 01:00:29.196512 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 01:00:29.196518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-02 01:00:29.196524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196624 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-02 01:00:29.196631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196657 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-02 01:00:29.196675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-02 01:00:29.196705 | orchestrator |
2026-04-02 01:00:29.196713 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-02 01:00:29.196721 | orchestrator | Thursday 02 April 2026 00:59:01 +0000 (0:00:04.102) 0:01:31.122 ********
2026-04-02 01:00:29.196728 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-02 01:00:29.196736 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:00:29.196742 | orchestrator |
2026-04-02 01:00:29.196747 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-02 01:00:29.196752 | orchestrator | Thursday 02 April 2026 00:59:01 +0000 (0:00:00.830) 0:01:31.952 ********
2026-04-02 01:00:29.196758 | orchestrator |
2026-04-02 01:00:29.196764 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-02 01:00:29.196769 | orchestrator | Thursday 02 April 2026 00:59:01 +0000 (0:00:00.056) 0:01:32.009 ********
2026-04-02 01:00:29.196775 | orchestrator |
2026-04-02 01:00:29.196781 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-02 01:00:29.196787 | orchestrator | Thursday 02 April 2026 00:59:02 +0000 (0:00:00.055) 0:01:32.065 ********
2026-04-02 01:00:29.196794 | orchestrator |
2026-04-02 01:00:29.196804 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-02 01:00:29.196811 | orchestrator | Thursday 02 April 2026 00:59:02 +0000 (0:00:00.050) 0:01:32.115 ********
2026-04-02 01:00:29.196817 | orchestrator |
2026-04-02 01:00:29.196824 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-02 01:00:29.196830 | orchestrator | Thursday 02 April 2026 00:59:02 +0000 (0:00:00.061) 0:01:32.176 ********
2026-04-02 01:00:29.196837 | orchestrator |
2026-04-02 01:00:29.196843 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-02 01:00:29.196850 | orchestrator | Thursday 02 April 2026 00:59:02 +0000 (0:00:00.049) 0:01:32.226 ********
2026-04-02 01:00:29.196856 | orchestrator |
2026-04-02 01:00:29.196887 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-02 01:00:29.196894 | orchestrator | Thursday 02 April 2026 00:59:02 +0000 (0:00:00.047) 0:01:32.274 ********
2026-04-02 01:00:29.196900 | orchestrator |
2026-04-02 01:00:29.196906 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-02 01:00:29.196913 | orchestrator | Thursday 02 April 2026 00:59:02 +0000 (0:00:00.067) 0:01:32.342 ********
2026-04-02 01:00:29.196919 | orchestrator | changed: [testbed-manager]
2026-04-02 01:00:29.196926 | orchestrator |
2026-04-02 01:00:29.196932 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-02 01:00:29.196946 | orchestrator | Thursday 02 April 2026 00:59:15 +0000 (0:00:13.190) 0:01:45.533 ********
2026-04-02 01:00:29.196953 | orchestrator | changed: [testbed-manager]
2026-04-02 01:00:29.196960 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:29.196966 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:00:29.196973 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:29.196980 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:00:29.196987 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:29.196995 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:00:29.197001 | orchestrator |
2026-04-02 01:00:29.197008 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-02 01:00:29.197015 | orchestrator | Thursday 02 April 2026 00:59:29 +0000 (0:00:14.227) 0:01:59.761 ********
2026-04-02 01:00:29.197021 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:29.197032 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:29.197039 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:29.197045 | orchestrator |
2026-04-02 01:00:29.197052 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-02 01:00:29.197059 | orchestrator | Thursday 02 April 2026 00:59:34 +0000 (0:00:04.842) 0:02:04.604 ********
2026-04-02 01:00:29.197065 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:29.197072 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:29.197079 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:29.197085 | orchestrator |
2026-04-02 01:00:29.197092 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-02 01:00:29.197106 | orchestrator | Thursday 02 April 2026 00:59:44 +0000 (0:00:10.215) 0:02:14.819 ********
2026-04-02 01:00:29.197113 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:29.197120 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:00:29.197126 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:00:29.197133 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:00:29.197139 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:29.197146 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:29.197152 | orchestrator | changed: [testbed-manager]
2026-04-02 01:00:29.197159 | orchestrator |
2026-04-02 01:00:29.197165 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-02 01:00:29.197172 | orchestrator | Thursday 02 April 2026 00:59:57 +0000 (0:00:12.526) 0:02:27.346 ********
2026-04-02 01:00:29.197179 | orchestrator | changed: [testbed-manager]
2026-04-02 01:00:29.197185 | orchestrator |
2026-04-02 01:00:29.197192 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-02 01:00:29.197203 | orchestrator | Thursday 02 April 2026 01:00:04 +0000 (0:00:06.734) 0:02:34.081 ********
2026-04-02 01:00:29.197210 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:29.197217 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:29.197230 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:29.197251 | orchestrator |
2026-04-02 01:00:29.197258 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-02 01:00:29.197264 | orchestrator | Thursday 02 April 2026 01:00:09 +0000 (0:00:05.751) 0:02:39.832 ********
2026-04-02 01:00:29.197270 | orchestrator | changed: [testbed-manager]
2026-04-02 01:00:29.197276 | orchestrator |
2026-04-02 01:00:29.197281 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-02 01:00:29.197287 | orchestrator | Thursday 02 April 2026 01:00:15 +0000 (0:00:05.779) 0:02:45.612 ********
2026-04-02 01:00:29.197293 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:00:29.197299 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:00:29.197305 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:00:29.197312 | orchestrator |
2026-04-02 01:00:29.197318 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:00:29.197325 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-02 01:00:29.197332 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-02 01:00:29.197338 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-02 01:00:29.197345 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-02 01:00:29.197351 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 01:00:29.197357 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 01:00:29.197364 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-02 01:00:29.197370 | orchestrator |
2026-04-02 01:00:29.197376 | orchestrator |
2026-04-02 01:00:29.197383 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:00:29.197389 | orchestrator | Thursday 02 April 2026 01:00:26 +0000 (0:00:10.793) 0:02:56.405 ********
2026-04-02 01:00:29.197396 | orchestrator | ===============================================================================
2026-04-02 01:00:29.197402 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.07s
2026-04-02 01:00:29.197409 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.45s
2026-04-02 01:00:29.197415 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.23s
2026-04-02 01:00:29.197422 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.19s
2026-04-02 01:00:29.197428 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 12.53s
2026-04-02 01:00:29.197439 | orchestrator | prometheus : Restart prometheus-libvirt-exporter
container ------------- 10.79s 2026-04-02 01:00:29.197446 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.22s 2026-04-02 01:00:29.197452 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.73s 2026-04-02 01:00:29.197459 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.60s 2026-04-02 01:00:29.197465 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.49s 2026-04-02 01:00:29.197472 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.78s 2026-04-02 01:00:29.197486 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.75s 2026-04-02 01:00:29.197492 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 4.84s 2026-04-02 01:00:29.197499 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.10s 2026-04-02 01:00:29.197505 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.06s 2026-04-02 01:00:29.197510 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.67s 2026-04-02 01:00:29.197516 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.55s 2026-04-02 01:00:29.197522 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.29s 2026-04-02 01:00:29.197528 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.28s 2026-04-02 01:00:29.197534 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.91s 2026-04-02 01:00:29.197540 | orchestrator | 2026-04-02 01:00:29 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED 2026-04-02 01:00:29.197547 | orchestrator | 2026-04-02 01:00:29 | INFO  | Task 
3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:29.197554 | orchestrator | 2026-04-02 01:00:29 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED
2026-04-02 01:00:29.197560 | orchestrator | 2026-04-02 01:00:29 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:32.225964 | orchestrator | 2026-04-02 01:00:32 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:32.226217 | orchestrator | 2026-04-02 01:00:32 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:32.226634 | orchestrator | 2026-04-02 01:00:32 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:32.227182 | orchestrator | 2026-04-02 01:00:32 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED
2026-04-02 01:00:32.227221 | orchestrator | 2026-04-02 01:00:32 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:35.254219 | orchestrator | 2026-04-02 01:00:35 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:35.254416 | orchestrator | 2026-04-02 01:00:35 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:35.255301 | orchestrator | 2026-04-02 01:00:35 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:35.256260 | orchestrator | 2026-04-02 01:00:35 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED
2026-04-02 01:00:35.256295 | orchestrator | 2026-04-02 01:00:35 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:38.286636 | orchestrator | 2026-04-02 01:00:38 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:38.288549 | orchestrator | 2026-04-02 01:00:38 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:38.289727 | orchestrator | 2026-04-02 01:00:38 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:38.291334 | orchestrator | 2026-04-02 01:00:38 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED
2026-04-02 01:00:38.291373 | orchestrator | 2026-04-02 01:00:38 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:41.330386 | orchestrator | 2026-04-02 01:00:41 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:41.331740 | orchestrator | 2026-04-02 01:00:41 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:41.333733 | orchestrator | 2026-04-02 01:00:41 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:41.335367 | orchestrator | 2026-04-02 01:00:41 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED
2026-04-02 01:00:41.335555 | orchestrator | 2026-04-02 01:00:41 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:44.375333 | orchestrator | 2026-04-02 01:00:44 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:44.377675 | orchestrator | 2026-04-02 01:00:44 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:44.379889 | orchestrator | 2026-04-02 01:00:44 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:44.381701 | orchestrator | 2026-04-02 01:00:44 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED
2026-04-02 01:00:44.381755 | orchestrator | 2026-04-02 01:00:44 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:47.421865 | orchestrator | 2026-04-02 01:00:47 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:47.423562 | orchestrator | 2026-04-02 01:00:47 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:47.425364 | orchestrator | 2026-04-02 01:00:47 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:47.427073 | orchestrator | 2026-04-02 01:00:47 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state STARTED
2026-04-02 01:00:47.427098 | orchestrator | 2026-04-02 01:00:47 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:50.477665 | orchestrator | 2026-04-02 01:00:50 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:50.478141 | orchestrator | 2026-04-02 01:00:50 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:50.479694 | orchestrator | 2026-04-02 01:00:50 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:50.480431 | orchestrator | 2026-04-02 01:00:50 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state STARTED
2026-04-02 01:00:50.481767 | orchestrator | 2026-04-02 01:00:50 | INFO  | Task 07dec8a4-4ad2-49c6-97d3-225141541951 is in state SUCCESS
2026-04-02 01:00:50.483196 | orchestrator |
2026-04-02 01:00:50.483247 | orchestrator |
2026-04-02 01:00:50.483258 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 01:00:50.483267 | orchestrator |
2026-04-02 01:00:50.483287 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 01:00:50.483292 | orchestrator | Thursday 02 April 2026 00:57:37 +0000 (0:00:00.377) 0:00:00.377 ********
2026-04-02 01:00:50.483298 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:00:50.483305 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:00:50.483311 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:00:50.483318 | orchestrator |
2026-04-02 01:00:50.483327 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 01:00:50.483335 | orchestrator | Thursday 02 April 2026 00:57:37 +0000 (0:00:00.308) 0:00:00.685 ********
2026-04-02 01:00:50.483340 |
orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-02 01:00:50.483346 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-02 01:00:50.483352 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-02 01:00:50.483358 | orchestrator |
2026-04-02 01:00:50.483364 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-02 01:00:50.483370 | orchestrator |
2026-04-02 01:00:50.483375 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-02 01:00:50.483381 | orchestrator | Thursday 02 April 2026 00:57:37 +0000 (0:00:00.724) 0:00:01.067 ********
2026-04-02 01:00:50.483386 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:00:50.483416 | orchestrator |
2026-04-02 01:00:50.483422 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-02 01:00:50.483427 | orchestrator | Thursday 02 April 2026 00:57:38 +0000 (0:00:00.724) 0:00:01.791 ********
2026-04-02 01:00:50.483433 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-02 01:00:50.483439 | orchestrator |
2026-04-02 01:00:50.483444 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-02 01:00:50.483451 | orchestrator | Thursday 02 April 2026 00:57:47 +0000 (0:00:09.053) 0:00:10.845 ********
2026-04-02 01:00:50.483457 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-02 01:00:50.483465 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-02 01:00:50.483471 | orchestrator |
2026-04-02 01:00:50.483477 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-02 01:00:50.483483 | orchestrator | Thursday 02 April 2026 00:57:56 +0000 (0:00:08.563) 0:00:19.409 ********
2026-04-02 01:00:50.483489 | orchestrator | FAILED - RETRYING: [testbed-node-0]: glance | Creating projects (5 retries left).
2026-04-02 01:00:50.483496 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-02 01:00:50.483504 | orchestrator |
2026-04-02 01:00:50.483508 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-02 01:00:50.483512 | orchestrator | Thursday 02 April 2026 00:58:13 +0000 (0:00:17.498) 0:00:36.908 ********
2026-04-02 01:00:50.483516 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-02 01:00:50.483520 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-02 01:00:50.483523 | orchestrator |
2026-04-02 01:00:50.483527 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-02 01:00:50.483531 | orchestrator | Thursday 02 April 2026 00:58:18 +0000 (0:00:04.662) 0:00:41.570 ********
2026-04-02 01:00:50.483535 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-02 01:00:50.483539 | orchestrator |
2026-04-02 01:00:50.483543 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-02 01:00:50.483547 | orchestrator | Thursday 02 April 2026 00:58:22 +0000 (0:00:03.879) 0:00:45.450 ********
2026-04-02 01:00:50.483551 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-02 01:00:50.483555 | orchestrator |
2026-04-02 01:00:50.483558 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-02 01:00:50.483564 | orchestrator | Thursday 02 April 2026 00:58:26 +0000 (0:00:04.162) 0:00:49.612 ********
2026-04-02 01:00:50.483598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 01:00:50.483616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 01:00:50.483623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.483630 | orchestrator |
2026-04-02 01:00:50.483636 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-02 01:00:50.483642 | orchestrator | Thursday 02 April 2026 00:58:29 +0000 (0:00:03.429) 0:00:53.042 ********
2026-04-02 01:00:50.483648 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:00:50.483660 | orchestrator |
2026-04-02 01:00:50.483670 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-02 01:00:50.483677 | orchestrator | Thursday 02 April 2026 00:58:30 +0000 (0:00:00.611) 0:00:53.653 ********
2026-04-02 01:00:50.483686 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.483690 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:50.483693 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:50.483697 | orchestrator |
2026-04-02 01:00:50.483701 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-02 01:00:50.483705 | orchestrator | Thursday 02 April 2026 00:58:33 +0000 (0:00:03.041) 0:00:56.695 ********
2026-04-02 01:00:50.483709 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-02 01:00:50.483713 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-02 01:00:50.483717 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-02 01:00:50.483721 | orchestrator |
2026-04-02 01:00:50.483724 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-02 01:00:50.483728 | orchestrator | Thursday 02 April 2026 00:58:34 +0000 (0:00:01.441) 0:00:58.137 ********
2026-04-02 01:00:50.483732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-02 01:00:50.483736 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-02 01:00:50.483740 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-02 01:00:50.483743 | orchestrator |
2026-04-02 01:00:50.483748 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-02 01:00:50.483754 | orchestrator | Thursday 02 April 2026 00:58:35 +0000 (0:00:01.214) 0:00:59.352 ********
2026-04-02 01:00:50.483760 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:00:50.483765 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:00:50.483771 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:00:50.483779 | orchestrator |
2026-04-02 01:00:50.483786 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-02 01:00:50.483793 | orchestrator | Thursday 02 April 2026 00:58:36 +0000 (0:00:00.117) 0:00:59.926 ********
2026-04-02 01:00:50.483800 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.483807 | orchestrator |
2026-04-02 01:00:50.483813 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-02 01:00:50.483820 | orchestrator | Thursday 02 April 2026 00:58:36 +0000 (0:00:00.117) 0:01:00.044 ********
2026-04-02 01:00:50.483825 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.483830 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.483860 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.483866 | orchestrator |
2026-04-02 01:00:50.483870 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-02 01:00:50.483875 | orchestrator | Thursday 02 April 2026 00:58:36 +0000 (0:00:00.254) 0:01:00.298 ********
2026-04-02 01:00:50.483880 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:00:50.483884 | orchestrator |
2026-04-02 01:00:50.483888 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-02 01:00:50.483893 | orchestrator | Thursday 02 April 2026 00:58:37 +0000 (0:00:00.558) 0:01:00.857 ********
2026-04-02 01:00:50.483902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 01:00:50.483916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 01:00:50.483922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-02 01:00:50.483931 | orchestrator | 2026-04-02 01:00:50.483935 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-02 01:00:50.483940 | orchestrator | Thursday 02 April 2026 00:58:41 +0000 (0:00:04.383) 0:01:05.240 ******** 2026-04-02 01:00:50.483953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.483959 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.483964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.483973 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.483986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.483992 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.483996 | orchestrator |
2026-04-02 01:00:50.484001 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-04-02 01:00:50.484005 | orchestrator | Thursday 02 April 2026 00:58:45 +0000 (0:00:03.752)       0:01:08.992 ********
2026-04-02 01:00:50.484010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484019 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484029 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484045 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484049 | orchestrator |
2026-04-02 01:00:50.484054 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-04-02 01:00:50.484059 | orchestrator | Thursday 02 April 2026 00:58:49 +0000 (0:00:03.780)       0:01:12.772 ********
2026-04-02 01:00:50.484063 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484068 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484072 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484084 | orchestrator |
2026-04-02 01:00:50.484089 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-04-02 01:00:50.484093 | orchestrator | Thursday 02 April 2026 00:58:54 +0000 (0:00:04.981)       0:01:17.754 ********
2026-04-02 01:00:50.484098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484126 | orchestrator |
2026-04-02 01:00:50.484131 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-02 01:00:50.484135 | orchestrator | Thursday 02 April 2026 00:58:58 +0000 (0:00:04.021)       0:01:21.775 ********
2026-04-02 01:00:50.484139 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:50.484143 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.484147 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:50.484151 | orchestrator |
2026-04-02 01:00:50.484155 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-02 01:00:50.484158 | orchestrator | Thursday 02 April 2026 00:59:04 +0000 (0:00:05.698)       0:01:27.473 ********
2026-04-02 01:00:50.484162 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484174 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484178 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484182 | orchestrator |
2026-04-02 01:00:50.484186 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-02 01:00:50.484196 | orchestrator | Thursday 02 April 2026 00:59:08 +0000 (0:00:03.933)       0:01:31.407 ********
2026-04-02 01:00:50.484200 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484204 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484208 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484212 | orchestrator |
2026-04-02 01:00:50.484216 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-02 01:00:50.484220 | orchestrator | Thursday 02 April 2026 00:59:11 +0000 (0:00:03.888)       0:01:35.295 ********
2026-04-02 01:00:50.484227 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484231 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484234 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484238 | orchestrator |
2026-04-02 01:00:50.484245 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-02 01:00:50.484249 | orchestrator | Thursday 02 April 2026 00:59:18 +0000 (0:00:06.872)       0:01:42.167 ********
2026-04-02 01:00:50.484253 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484257 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484261 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484265 | orchestrator |
2026-04-02 01:00:50.484269 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-02 01:00:50.484273 | orchestrator | Thursday 02 April 2026 00:59:24 +0000 (0:00:05.641)       0:01:47.809 ********
2026-04-02 01:00:50.484277 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484281 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484285 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484288 | orchestrator |
2026-04-02 01:00:50.484292 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-02 01:00:50.484300 | orchestrator | Thursday 02 April 2026 00:59:24 +0000 (0:00:00.354)       0:01:48.164 ********
2026-04-02 01:00:50.484304 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-02 01:00:50.484308 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484312 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-02 01:00:50.484316 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484319 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-02 01:00:50.484323 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484327 | orchestrator |
2026-04-02 01:00:50.484331 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-02 01:00:50.484335 | orchestrator | Thursday 02 April 2026 00:59:27 +0000 (0:00:03.080)       0:01:51.244 ********
2026-04-02 01:00:50.484339 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484343 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484347 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484350 | orchestrator |
2026-04-02 01:00:50.484354 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-02 01:00:50.484358 | orchestrator | Thursday 02 April 2026 00:59:31 +0000 (0:00:03.964)       0:01:55.209 ********
2026-04-02 01:00:50.484362 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484366 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484370 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484373 | orchestrator |
2026-04-02 01:00:50.484377 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-02 01:00:50.484381 | orchestrator | Thursday 02 April 2026 00:59:34 +0000 (0:00:03.022)       0:01:58.232 ********
2026-04-02 01:00:50.484385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-02 01:00:50.484409 | orchestrator |
2026-04-02 01:00:50.484413 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-02 01:00:50.484417 | orchestrator | Thursday 02 April 2026 00:59:38 +0000 (0:00:03.877)       0:02:02.109 ********
2026-04-02 01:00:50.484420 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:00:50.484424 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:00:50.484428 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:00:50.484432 | orchestrator |
2026-04-02 01:00:50.484436 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-04-02 01:00:50.484440 | orchestrator | Thursday 02 April 2026 00:59:38 +0000 (0:00:00.242)       0:02:02.352 ********
2026-04-02 01:00:50.484444 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.484447 | orchestrator |
2026-04-02 01:00:50.484451 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-04-02 01:00:50.484455 | orchestrator | Thursday 02 April 2026 00:59:41 +0000 (0:00:02.657)       0:02:05.009 ********
2026-04-02 01:00:50.484459 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.484463 | orchestrator |
2026-04-02 01:00:50.484475 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-02 01:00:50.484489 | orchestrator | Thursday 02 April 2026 00:59:43 +0000 (0:00:02.270)       0:02:07.280 ********
2026-04-02 01:00:50.484493 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.484497 | orchestrator |
2026-04-02 01:00:50.484500 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-04-02 01:00:50.484505 | orchestrator | Thursday 02 April 2026 00:59:46 +0000 (0:00:02.143)       0:02:09.423 ********
2026-04-02 01:00:50.484509 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.484513 | orchestrator |
2026-04-02 01:00:50.484517 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-02 01:00:50.484524 | orchestrator | Thursday 02 April 2026 01:00:14 +0000 (0:00:28.377)       0:02:37.801 ********
2026-04-02 01:00:50.484528 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.484532 | orchestrator |
2026-04-02 01:00:50.484539 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-02 01:00:50.484543 | orchestrator | Thursday 02 April 2026 01:00:16 +0000 (0:00:02.205)       0:02:40.007 ********
2026-04-02 01:00:50.484547 | orchestrator |
2026-04-02 01:00:50.484551 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-02 01:00:50.484555 | orchestrator | Thursday 02 April 2026 01:00:16 +0000 (0:00:00.079)       0:02:40.086 ********
2026-04-02 01:00:50.484559 | orchestrator |
2026-04-02 01:00:50.484563 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-02 01:00:50.484567 | orchestrator | Thursday 02 April 2026 01:00:16 +0000 (0:00:00.067)       0:02:40.153 ********
2026-04-02 01:00:50.484571 | orchestrator |
2026-04-02 01:00:50.484575 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-02 01:00:50.484579 | orchestrator | Thursday 02 April 2026 01:00:16 +0000 (0:00:00.077)       0:02:40.231 ********
2026-04-02 01:00:50.484582 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:00:50.484587 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:00:50.484590 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:00:50.484594 | orchestrator |
2026-04-02 01:00:50.484598 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:00:50.484602 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-04-02 01:00:50.484607 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-02 01:00:50.484611 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-02 01:00:50.484615 | orchestrator |
2026-04-02 01:00:50.484619 | orchestrator |
2026-04-02 01:00:50.484623 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:00:50.484627 | orchestrator | Thursday 02 April 2026 01:00:47 +0000 (0:00:30.919)       0:03:11.150 ********
2026-04-02 01:00:50.484631 | orchestrator | ===============================================================================
2026-04-02 01:00:50.484635 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.92s
2026-04-02 01:00:50.484639 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.38s
2026-04-02 01:00:50.484643 | orchestrator | service-ks-register : glance | Creating projects ----------------------- 17.50s
2026-04-02 01:00:50.484647 | orchestrator | service-ks-register : glance | Creating services ------------------------ 9.05s
2026-04-02 01:00:50.484651 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 8.56s
2026-04-02 01:00:50.484655 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.87s
2026-04-02 01:00:50.484659 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.70s
2026-04-02 01:00:50.484663 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.64s
2026-04-02 01:00:50.484667 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.98s
2026-04-02 01:00:50.484675 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.66s
2026-04-02 01:00:50.484679 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.38s
2026-04-02 01:00:50.484683 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.16s
2026-04-02 01:00:50.484686 | orchestrator | glance : Copying over config.json files for services -------------------- 4.02s
2026-04-02 01:00:50.484690 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.96s
2026-04-02 01:00:50.484694 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.93s
2026-04-02 01:00:50.484699 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.89s
2026-04-02 01:00:50.484702 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.88s
2026-04-02 01:00:50.484706 | orchestrator | glance : Check glance containers ---------------------------------------- 3.88s
2026-04-02 01:00:50.484711 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.78s
2026-04-02 01:00:50.484715 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.75s
2026-04-02 01:00:50.484719 | orchestrator | 2026-04-02 01:00:50 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:53.532710 | orchestrator | 2026-04-02 01:00:53 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:53.534955 | orchestrator | 2026-04-02 01:00:53 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:53.536994 | orchestrator | 2026-04-02 01:00:53 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:53.538595 | orchestrator | 2026-04-02 01:00:53 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state STARTED
2026-04-02 01:00:53.538627 | orchestrator | 2026-04-02 01:00:53 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:56.578087 | orchestrator | 2026-04-02 01:00:56 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:56.579481 | orchestrator | 2026-04-02 01:00:56 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:56.580579 | orchestrator | 2026-04-02 01:00:56 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:56.581706 | orchestrator | 2026-04-02 01:00:56 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state STARTED
2026-04-02 01:00:56.582008 | orchestrator | 2026-04-02 01:00:56 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:00:59.623490 | orchestrator | 2026-04-02 01:00:59 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:00:59.623727 | orchestrator | 2026-04-02 01:00:59 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state STARTED
2026-04-02 01:00:59.626646 | orchestrator | 2026-04-02 01:00:59 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:00:59.627260 | orchestrator | 2026-04-02 01:00:59 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state STARTED
2026-04-02 01:00:59.627285 | orchestrator | 2026-04-02 01:00:59 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:01:02.665689 | orchestrator | 2026-04-02 01:01:02 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:01:02.672220 | orchestrator |
2026-04-02 01:01:02.672593 | orchestrator |
2026-04-02 01:01:02.672612 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 01:01:02.672624 | orchestrator |
2026-04-02 01:01:02.672634 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 01:01:02.672644 | orchestrator | Thursday 02 April 2026 00:58:08 +0000 (0:00:00.273)       0:00:00.273 ********
2026-04-02 01:01:02.672749 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:01:02.672763 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:01:02.673006 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:01:02.673019 | orchestrator |
2026-04-02 01:01:02.673033 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 01:01:02.673051 | orchestrator | Thursday 02 April 2026 00:58:08 +0000 (0:00:00.251)       0:00:00.525 ********
2026-04-02 01:01:02.673078 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-02 01:01:02.673095 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-02 01:01:02.673113 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-02 01:01:02.673129 | orchestrator |
2026-04-02 01:01:02.673147 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-02 01:01:02.673164 | orchestrator |
2026-04-02 01:01:02.673182 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-02 01:01:02.673200 | orchestrator | Thursday 02 April 2026 00:58:09 +0000 (0:00:00.223)       0:00:00.748 ********
2026-04-02 01:01:02.673218 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:01:02.673237 | orchestrator |
2026-04-02 01:01:02.673253 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-04-02 01:01:02.673263 | orchestrator | Thursday 02 April 2026 00:58:09 +0000 (0:00:00.525)       0:00:01.273 ********
2026-04-02 01:01:02.673273 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-02 01:01:02.673283 | orchestrator |
2026-04-02 01:01:02.673293 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-04-02 01:01:02.673302 | orchestrator | Thursday 02 April 2026 00:58:13 +0000 (0:00:04.146)       0:00:05.420 ********
2026-04-02 01:01:02.673313 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-02 01:01:02.673323 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-02 01:01:02.673332 | orchestrator |
2026-04-02 01:01:02.673342 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-02 01:01:02.673352 | orchestrator | Thursday 02 April 2026 00:58:21 +0000 (0:00:07.692)       0:00:13.112 ********
2026-04-02 01:01:02.673361 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-02 01:01:02.673371 | orchestrator |
2026-04-02 01:01:02.673381 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-02 01:01:02.673391 | orchestrator | Thursday 02 April 2026 00:58:25 +0000 (0:00:04.006)       0:00:17.119 ********
2026-04-02 01:01:02.673400 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-02 01:01:02.673410 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-02 01:01:02.673420 | orchestrator |
2026-04-02 01:01:02.673429 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-02 01:01:02.673439 | orchestrator | Thursday 02 April 2026 00:58:29 +0000 (0:00:04.059)       0:00:21.179 ********
2026-04-02 01:01:02.673449 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-02 01:01:02.673458 | orchestrator |
2026-04-02 01:01:02.673468 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-04-02 01:01:02.673477 | orchestrator | Thursday 02 April 2026 00:58:32 +0000 (0:00:03.208)
0:00:24.387 ******** 2026-04-02 01:01:02.673487 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-02 01:01:02.673497 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-02 01:01:02.673513 | orchestrator | 2026-04-02 01:01:02.673537 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-02 01:01:02.673557 | orchestrator | Thursday 02 April 2026 00:58:40 +0000 (0:00:07.187) 0:00:31.575 ******** 2026-04-02 01:01:02.673594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.673702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.673736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.673789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
2026-04-02 01:01:02.673856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.673934 | orchestrator | 2026-04-02 01:01:02.673947 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-02 01:01:02.673959 | orchestrator | Thursday 02 April 2026 00:58:43 +0000 (0:00:03.403) 0:00:34.978 ******** 2026-04-02 01:01:02.673971 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:01:02.673980 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 01:01:02.673990 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:01:02.673999 | orchestrator | 2026-04-02 01:01:02.674009 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-02 01:01:02.674065 | orchestrator | Thursday 02 April 2026 00:58:43 +0000 (0:00:00.357) 0:00:35.336 ******** 2026-04-02 01:01:02.674076 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:01:02.674086 | orchestrator | 2026-04-02 01:01:02.674095 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-02 01:01:02.674105 | orchestrator | Thursday 02 April 2026 00:58:44 +0000 (0:00:00.637) 0:00:35.974 ******** 2026-04-02 01:01:02.674140 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-02 01:01:02.674152 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-02 01:01:02.674174 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-02 01:01:02.674184 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-02 01:01:02.674203 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-02 01:01:02.674213 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-02 01:01:02.674222 | orchestrator | 2026-04-02 01:01:02.674232 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-02 01:01:02.674242 | orchestrator | Thursday 02 April 2026 00:58:46 +0000 (0:00:02.523) 0:00:38.497 ******** 2026-04-02 01:01:02.674253 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-02 01:01:02.674264 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-02 01:01:02.674282 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-02 01:01:02.674296 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-02 01:01:02.674333 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-02 
01:01:02.674345 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-02 01:01:02.674355 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-02 01:01:02.674372 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-02 01:01:02.674386 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-02 01:01:02.674419 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-02 01:01:02.674430 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-02 01:01:02.674440 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-02 01:01:02.674450 | orchestrator | 2026-04-02 01:01:02.674460 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-02 01:01:02.674476 | orchestrator | Thursday 02 April 2026 00:58:51 +0000 (0:00:04.338) 0:00:42.836 ******** 2026-04-02 01:01:02.674485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-02 01:01:02.674495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-02 
01:01:02.674505 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-02 01:01:02.674515 | orchestrator | 2026-04-02 01:01:02.674525 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-02 01:01:02.674534 | orchestrator | Thursday 02 April 2026 00:58:52 +0000 (0:00:01.690) 0:00:44.526 ******** 2026-04-02 01:01:02.674544 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-02 01:01:02.674554 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-02 01:01:02.674563 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-02 01:01:02.674573 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-02 01:01:02.674582 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-02 01:01:02.674592 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-02 01:01:02.674601 | orchestrator | 2026-04-02 01:01:02.674611 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-02 01:01:02.674620 | orchestrator | Thursday 02 April 2026 00:58:56 +0000 (0:00:03.180) 0:00:47.707 ******** 2026-04-02 01:01:02.674630 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-02 01:01:02.674640 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-02 01:01:02.674649 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-02 01:01:02.674659 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-02 01:01:02.674792 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-02 01:01:02.674809 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-02 01:01:02.674841 | orchestrator | 2026-04-02 01:01:02.674853 | orchestrator | TASK [cinder : 
Check if policies shall be overwritten] ************************* 2026-04-02 01:01:02.674863 | orchestrator | Thursday 02 April 2026 00:58:57 +0000 (0:00:01.350) 0:00:49.057 ******** 2026-04-02 01:01:02.674872 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:01:02.674882 | orchestrator | 2026-04-02 01:01:02.674891 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-02 01:01:02.674901 | orchestrator | Thursday 02 April 2026 00:58:57 +0000 (0:00:00.269) 0:00:49.327 ******** 2026-04-02 01:01:02.674911 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:01:02.674920 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:01:02.674930 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:01:02.674939 | orchestrator | 2026-04-02 01:01:02.674949 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-02 01:01:02.674958 | orchestrator | Thursday 02 April 2026 00:58:58 +0000 (0:00:00.313) 0:00:49.641 ******** 2026-04-02 01:01:02.674968 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:01:02.674978 | orchestrator | 2026-04-02 01:01:02.674988 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-02 01:01:02.675029 | orchestrator | Thursday 02 April 2026 00:58:58 +0000 (0:00:00.667) 0:00:50.309 ******** 2026-04-02 01:01:02.675041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.675063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.675074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675084 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.675095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.675321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02 | INFO  | Task 8a268617-d281-42ee-af83-7f8954f88933 is in state SUCCESS 2026-04-02 01:01:02.675349 | orchestrator | 2026-04-02 01:01:02.675359 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-02 01:01:02.675369 | orchestrator | Thursday 02 April 2026 00:59:03 +0000 (0:00:04.494) 0:00:54.804 ******** 2026-04-02 01:01:02.675380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.675390 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675426 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:01:02.675447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.675466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675503 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:01:02.675519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.675532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675583 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:01:02.675594 | orchestrator | 2026-04-02 01:01:02.675606 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-02 01:01:02.675618 | orchestrator | Thursday 02 April 2026 00:59:03 +0000 (0:00:00.695) 0:00:55.499 ******** 2026-04-02 01:01:02.675630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.675643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675696 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:01:02.675709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.675721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675784 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:01:02.675811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.675861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.675895 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 01:01:02.675905 | orchestrator | 2026-04-02 01:01:02.675915 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-02 01:01:02.675925 | orchestrator | Thursday 02 April 2026 00:59:04 +0000 (0:00:00.840) 0:00:56.339 ******** 2026-04-02 01:01:02.675940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.675957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.675975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.675986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676190 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676210 | orchestrator |
2026-04-02 01:01:02.676225 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-04-02 01:01:02.676235 | orchestrator | Thursday 02 April 2026 00:59:09 +0000 (0:00:05.177) 0:01:01.516 ********
2026-04-02 01:01:02.676245 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-02 01:01:02.676255 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-02 01:01:02.676269 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-02 01:01:02.676278 | orchestrator |
2026-04-02 01:01:02.676288 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-04-02 01:01:02.676298 | orchestrator | Thursday 02 April 2026 00:59:12 +0000 (0:00:02.632) 0:01:04.149 ********
2026-04-02 01:01:02.676314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-02 01:01:02.676325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776',
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.676335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.676346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.676431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676476 | orchestrator |
2026-04-02 01:01:02.676487 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-04-02 01:01:02.676498 | orchestrator | Thursday 02 April 2026 00:59:28 +0000 (0:00:15.552) 0:01:19.701 ********
2026-04-02 01:01:02.676509 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:01:02.676520 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.676531 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:01:02.676542 | orchestrator |
2026-04-02 01:01:02.676559 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] *********************
2026-04-02 01:01:02.676570 | orchestrator | Thursday 02 April 2026 00:59:29 +0000 (0:00:01.525) 0:01:21.227 ********
2026-04-02 01:01:02.676581 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.676592 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:01:02.676603 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:01:02.676614 | orchestrator |
2026-04-02 01:01:02.676624 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-04-02 01:01:02.676635 | orchestrator | Thursday 02 April 2026 00:59:31 +0000 (0:00:02.108) 0:01:23.335 ********
2026-04-02 01:01:02.676647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-02 01:01:02.676659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.676676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.676692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.676704 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:01:02.676721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.676733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.676744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.676756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-02 01:01:02.676784 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:01:02.676804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-02 01:01:02.676870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.676947 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:01:02.676968 | orchestrator |
2026-04-02 01:01:02.676989 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-04-02 01:01:02.677010 | orchestrator | Thursday 02 April 2026 00:59:32 +0000 (0:00:00.897) 0:01:24.232 ********
2026-04-02 01:01:02.677030 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:01:02.677064 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:01:02.677083 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:01:02.677103 | orchestrator |
2026-04-02 01:01:02.677123 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-04-02 01:01:02.677142 | orchestrator | Thursday 02 April 2026 00:59:32 +0000 (0:00:00.310) 0:01:24.543 ********
2026-04-02 01:01:02.677164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-02 01:01:02.677193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes':
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.677216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-02 01:01:02.677250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.677274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.677308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.677323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.677340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.677351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-02 01:01:02.677496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.677518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.677565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-02 01:01:02.677589 | orchestrator |
2026-04-02 01:01:02.677608 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-02 01:01:02.677627 | orchestrator | Thursday 02 April 2026 00:59:36 +0000 (0:00:03.044) 0:01:27.587 ********
2026-04-02 01:01:02.677767 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:01:02.677794 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:01:02.677813 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:01:02.677859 | orchestrator |
2026-04-02 01:01:02.677878 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-04-02 01:01:02.677896 | orchestrator | Thursday 02 April 2026 00:59:36 +0000 (0:00:00.318) 0:01:27.905 ********
2026-04-02 01:01:02.677915 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.677934 | orchestrator |
2026-04-02 01:01:02.677954 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-04-02 01:01:02.677973 | orchestrator | Thursday 02 April 2026 00:59:38 +0000 (0:00:02.406) 0:01:30.312 ********
2026-04-02 01:01:02.677991 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.678006 | orchestrator |
2026-04-02 01:01:02.678075 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-04-02 01:01:02.678088 | orchestrator | Thursday 02 April 2026 00:59:41 +0000 (0:00:02.800) 0:01:33.112 ********
2026-04-02 01:01:02.678099 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.678110 | orchestrator |
2026-04-02 01:01:02.678120 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-02 01:01:02.678131 | orchestrator | Thursday 02 April 2026 01:00:01 +0000 (0:00:19.556) 0:01:52.669 ********
2026-04-02 01:01:02.678142 | orchestrator |
2026-04-02 01:01:02.678153 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-02 01:01:02.678164 | orchestrator | Thursday 02 April 2026 01:00:01 +0000 (0:00:00.062) 0:01:52.732 ********
2026-04-02 01:01:02.678175 | orchestrator |
2026-04-02 01:01:02.678195 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-02 01:01:02.678207 | orchestrator | Thursday 02 April 2026 01:00:01 +0000 (0:00:00.061) 0:01:52.793 ********
2026-04-02 01:01:02.678217 | orchestrator |
2026-04-02 01:01:02.678228 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-04-02 01:01:02.678239 | orchestrator | Thursday 02 April 2026 01:00:01 +0000 (0:00:00.064) 0:01:52.858 ********
2026-04-02 01:01:02.678250 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.678261 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:01:02.678272 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:01:02.678283 | orchestrator |
2026-04-02 01:01:02.678294 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-04-02 01:01:02.678305 | orchestrator | Thursday 02 April 2026 01:00:23 +0000 (0:00:22.649) 0:02:15.507 ********
2026-04-02 01:01:02.678316 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.678337 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:01:02.678348 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:01:02.678359 | orchestrator |
2026-04-02 01:01:02.678370 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-04-02 01:01:02.678381 | orchestrator | Thursday 02 April 2026 01:00:29 +0000 (0:00:05.792) 0:02:21.300 ********
2026-04-02 01:01:02.678394 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.678408 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:01:02.678421 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:01:02.678434 | orchestrator |
2026-04-02 01:01:02.678460 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-04-02 01:01:02.678474 | orchestrator | Thursday 02 April 2026 01:00:49 +0000 (0:00:19.978) 0:02:41.279 ********
2026-04-02 01:01:02.678488 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:01:02.678501 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:01:02.678515 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:01:02.678529 | orchestrator |
2026-04-02 01:01:02.678542 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-04-02 01:01:02.678555 | orchestrator | Thursday 02 April 2026 01:01:00 +0000 (0:00:10.532) 0:02:51.812 ********
2026-04-02 01:01:02.678569 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:01:02.678582 | orchestrator |
2026-04-02 01:01:02.678596 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:01:02.678617 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-02 01:01:02.678647 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 01:01:02.678668 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 01:01:02.678689 | orchestrator |
2026-04-02 01:01:02.678709 | orchestrator |
2026-04-02 01:01:02.678730 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:01:02.678749 | orchestrator | Thursday 02 April 2026 01:01:00 +0000 (0:00:00.215) 0:02:52.027 ********
2026-04-02 01:01:02.678765 | orchestrator | ===============================================================================
2026-04-02 01:01:02.678782 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.65s
2026-04-02 01:01:02.678808 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 19.98s
2026-04-02 01:01:02.678890 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.56s
2026-04-02 01:01:02.678910 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.55s
2026-04-02 01:01:02.678927 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.53s
2026-04-02 01:01:02.678946 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.69s
2026-04-02 01:01:02.678963 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.19s
2026-04-02 01:01:02.678984 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.79s
2026-04-02 01:01:02.679003 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.18s
2026-04-02 01:01:02.679022 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.49s
2026-04-02 01:01:02.679037 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.34s
2026-04-02 01:01:02.679048 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.15s
2026-04-02 01:01:02.679059 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.06s
2026-04-02 01:01:02.679070 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 4.01s
2026-04-02 01:01:02.679080 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.40s
2026-04-02 01:01:02.679103 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.21s
2026-04-02 01:01:02.679114 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.18s
2026-04-02 01:01:02.679125 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.04s
2026-04-02 01:01:02.679136 | orchestrator | cinder : Creating Cinder database user
and setting permissions ---------- 2.80s 2026-04-02 01:01:02.679147 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.63s 2026-04-02 01:01:02.679158 | orchestrator | 2026-04-02 01:01:02 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED 2026-04-02 01:01:02.679170 | orchestrator | 2026-04-02 01:01:02 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:01:02.679188 | orchestrator | 2026-04-02 01:01:02 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state STARTED 2026-04-02 01:01:02.679199 | orchestrator | 2026-04-02 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:01:05.714576 | orchestrator | 2026-04-02 01:01:05 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:01:05.714664 | orchestrator | 2026-04-02 01:01:05 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED 2026-04-02 01:01:05.714672 | orchestrator | 2026-04-02 01:01:05 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:01:05.715637 | orchestrator | 2026-04-02 01:01:05 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state STARTED 2026-04-02 01:01:05.715688 | orchestrator | 2026-04-02 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:01:08.758358 | orchestrator | 2026-04-02 01:01:08 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:01:08.758708 | orchestrator | 2026-04-02 01:01:08 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED 2026-04-02 01:01:08.759428 | orchestrator | 2026-04-02 01:01:08 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:01:08.760060 | orchestrator | 2026-04-02 01:01:08 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state STARTED 2026-04-02 01:01:08.760097 | orchestrator | 2026-04-02 01:01:08 | INFO  | Wait 1 second(s) until the next check 
2026-04-02 01:02:55.204955 | orchestrator | 2026-04-02 01:02:55 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:02:55.206381 | orchestrator | 2026-04-02 01:02:55 | INFO  | Task d339e0ca-3ffa-4e8d-8491-33e0b1681f45 is in state STARTED 2026-04-02 01:02:55.206533 | orchestrator | 2026-04-02 01:02:55 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED 2026-04-02 01:02:55.207325 | orchestrator | 2026-04-02 01:02:55 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:02:55.211947 | orchestrator | 2026-04-02 01:02:55.212007 | orchestrator | 2026-04-02 01:02:55.212020 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:02:55.212032 | orchestrator | 2026-04-02 01:02:55.212042 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:02:55.212053 | orchestrator | Thursday 02 April 2026 01:00:51 +0000 (0:00:00.381) 0:00:00.381 ******** 2026-04-02 01:02:55.212064 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:02:55.212075 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:02:55.212086 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:02:55.212096 | orchestrator | 2026-04-02 01:02:55.212106 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02
01:02:55.212116 | orchestrator | Thursday 02 April 2026 01:00:51 +0000 (0:00:00.480) 0:00:00.861 ******** 2026-04-02 01:02:55.212126 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-02 01:02:55.212137 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-02 01:02:55.212240 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-02 01:02:55.212255 | orchestrator | 2026-04-02 01:02:55.212265 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-02 01:02:55.212274 | orchestrator | 2026-04-02 01:02:55.212282 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-02 01:02:55.212562 | orchestrator | Thursday 02 April 2026 01:00:51 +0000 (0:00:00.337) 0:00:01.198 ******** 2026-04-02 01:02:55.212577 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:02:55.212584 | orchestrator | 2026-04-02 01:02:55.212590 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-02 01:02:55.212595 | orchestrator | Thursday 02 April 2026 01:00:52 +0000 (0:00:00.625) 0:00:01.824 ******** 2026-04-02 01:02:55.212602 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-02 01:02:55.212608 | orchestrator | 2026-04-02 01:02:55.212614 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-02 01:02:55.212619 | orchestrator | Thursday 02 April 2026 01:00:56 +0000 (0:00:03.744) 0:00:05.568 ******** 2026-04-02 01:02:55.212625 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-02 01:02:55.212631 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-02 01:02:55.212637 | orchestrator | 
2026-04-02 01:02:55.212643 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-02 01:02:55.212648 | orchestrator | Thursday 02 April 2026 01:01:02 +0000 (0:00:06.265) 0:00:11.834 ******** 2026-04-02 01:02:55.212654 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-02 01:02:55.212660 | orchestrator | 2026-04-02 01:02:55.212666 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-02 01:02:55.212688 | orchestrator | Thursday 02 April 2026 01:01:06 +0000 (0:00:03.737) 0:00:15.572 ******** 2026-04-02 01:02:55.212698 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-02 01:02:55.212708 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-02 01:02:55.212719 | orchestrator | 2026-04-02 01:02:55.212728 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-02 01:02:55.212738 | orchestrator | Thursday 02 April 2026 01:01:10 +0000 (0:00:04.381) 0:00:19.954 ******** 2026-04-02 01:02:55.212747 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-02 01:02:55.212758 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-02 01:02:55.212768 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-02 01:02:55.212777 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-02 01:02:55.212786 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-02 01:02:55.212791 | orchestrator | 2026-04-02 01:02:55.212797 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-02 01:02:55.212803 | orchestrator | Thursday 02 April 2026 01:01:28 +0000 (0:00:17.592) 0:00:37.546 ******** 2026-04-02 01:02:55.212809 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-02 01:02:55.212815 | orchestrator | 2026-04-02 
01:02:55.212821 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-02 01:02:55.212826 | orchestrator | Thursday 02 April 2026 01:01:33 +0000 (0:00:05.079) 0:00:42.626 ******** 2026-04-02 01:02:55.212844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.212872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.212879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.212886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.212892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.212902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.212919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.212926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.212932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.212939 | orchestrator | 2026-04-02 01:02:55.212945 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-02 01:02:55.212951 | orchestrator | Thursday 02 April 2026 01:01:36 +0000 (0:00:02.970) 0:00:45.596 ******** 2026-04-02 01:02:55.212957 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-02 01:02:55.212963 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-02 01:02:55.212968 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-02 01:02:55.212974 | orchestrator | 2026-04-02 01:02:55.212980 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-02 01:02:55.212986 | orchestrator | Thursday 02 April 2026 01:01:37 +0000 (0:00:01.437) 0:00:47.034 ******** 2026-04-02 01:02:55.212992 | orchestrator | skipping: [testbed-node-0] 2026-04-02 
01:02:55.212998 | orchestrator | 2026-04-02 01:02:55.213003 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-02 01:02:55.213009 | orchestrator | Thursday 02 April 2026 01:01:37 +0000 (0:00:00.175) 0:00:47.209 ******** 2026-04-02 01:02:55.213015 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:02:55.213021 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:02:55.213026 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:02:55.213032 | orchestrator | 2026-04-02 01:02:55.213038 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-02 01:02:55.213044 | orchestrator | Thursday 02 April 2026 01:01:38 +0000 (0:00:00.335) 0:00:47.545 ******** 2026-04-02 01:02:55.213050 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:02:55.213056 | orchestrator | 2026-04-02 01:02:55.213061 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-02 01:02:55.213067 | orchestrator | Thursday 02 April 2026 01:01:38 +0000 (0:00:00.532) 0:00:48.078 ******** 2026-04-02 01:02:55.213080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-04-02 01:02:55.213153 | orchestrator | 2026-04-02 01:02:55.213159 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-02 01:02:55.213165 | orchestrator | Thursday 02 April 2026 01:01:43 +0000 (0:00:04.383) 0:00:52.461 ******** 2026-04-02 01:02:55.213172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213193 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213201 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:02:55.213215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213253 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:02:55.213263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213301 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:02:55.213311 | orchestrator | 2026-04-02 01:02:55.213320 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-02 01:02:55.213330 | orchestrator | Thursday 02 April 2026 01:01:44 +0000 (0:00:01.334) 0:00:53.795 ******** 2026-04-02 01:02:55.213346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2026-04-02 01:02:55.213386 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:02:55.213394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213423 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:02:55.213435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213462 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:02:55.213469 | orchestrator | 2026-04-02 01:02:55.213475 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-02 01:02:55.213482 | orchestrator | Thursday 02 April 2026 01:01:45 +0000 (0:00:00.842) 0:00:54.638 ******** 2026-04-02 01:02:55.213492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213519 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2026-04-02 01:02:55.213544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213566 | orchestrator | 2026-04-02 01:02:55.213572 | orchestrator | TASK [barbican : Copying over barbican-api.ini] 
******************************** 2026-04-02 01:02:55.213578 | orchestrator | Thursday 02 April 2026 01:01:49 +0000 (0:00:03.815) 0:00:58.453 ******** 2026-04-02 01:02:55.213584 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:02:55.213590 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:02:55.213596 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:02:55.213601 | orchestrator | 2026-04-02 01:02:55.213611 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-02 01:02:55.213617 | orchestrator | Thursday 02 April 2026 01:01:51 +0000 (0:00:02.386) 0:01:00.840 ******** 2026-04-02 01:02:55.213623 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 01:02:55.213630 | orchestrator | 2026-04-02 01:02:55.213640 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-02 01:02:55.213653 | orchestrator | Thursday 02 April 2026 01:01:52 +0000 (0:00:01.255) 0:01:02.095 ******** 2026-04-02 01:02:55.213665 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:02:55.213721 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:02:55.213733 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:02:55.213743 | orchestrator | 2026-04-02 01:02:55.213754 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-02 01:02:55.213765 | orchestrator | Thursday 02 April 2026 01:01:53 +0000 (0:00:01.092) 0:01:03.188 ******** 2026-04-02 01:02:55.213777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.213819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.213866 | orchestrator | 2026-04-02 01:02:55.213872 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-02 01:02:55.213878 | orchestrator | Thursday 02 April 2026 01:02:04 +0000 (0:00:10.741) 0:01:13.929 ******** 2026-04-02 01:02:55.213888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213910 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:02:55.213917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213925 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213946 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:02:55.213952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-02 01:02:55.213958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:02:55.213971 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:02:55.213976 | orchestrator | 2026-04-02 01:02:55.213982 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-02 01:02:55.213988 | orchestrator | Thursday 02 April 2026 01:02:05 +0000 (0:00:01.192) 0:01:15.122 ******** 2026-04-02 01:02:55.213997 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.214008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.214054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-02 01:02:55.214061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.214067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.214076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.214083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.214100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.214106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:02:55.214112 | orchestrator | 2026-04-02 01:02:55.214118 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-02 01:02:55.214124 | orchestrator | Thursday 02 April 2026 01:02:09 +0000 (0:00:03.714) 0:01:18.836 ******** 2026-04-02 01:02:55.214130 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:02:55.214136 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:02:55.214141 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:02:55.214147 | orchestrator | 2026-04-02 01:02:55.214153 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-02 01:02:55.214159 | orchestrator | Thursday 02 April 2026 01:02:10 +0000 (0:00:01.010) 0:01:19.849 ******** 2026-04-02 01:02:55.214164 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:02:55.214170 | orchestrator | 2026-04-02 01:02:55.214176 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-02 01:02:55.214182 | orchestrator | Thursday 02 April 2026 01:02:13 +0000 (0:00:02.707) 0:01:22.558 ******** 2026-04-02 01:02:55.214187 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:02:55.214193 | 
orchestrator | 2026-04-02 01:02:55.214199 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-02 01:02:55.214205 | orchestrator | Thursday 02 April 2026 01:02:16 +0000 (0:00:02.762) 0:01:25.321 ******** 2026-04-02 01:02:55.214211 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:02:55.214216 | orchestrator | 2026-04-02 01:02:55.214222 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-02 01:02:55.214228 | orchestrator | Thursday 02 April 2026 01:02:29 +0000 (0:00:13.085) 0:01:38.406 ******** 2026-04-02 01:02:55.214234 | orchestrator | 2026-04-02 01:02:55.214239 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-02 01:02:55.214245 | orchestrator | Thursday 02 April 2026 01:02:29 +0000 (0:00:00.438) 0:01:38.845 ******** 2026-04-02 01:02:55.214251 | orchestrator | 2026-04-02 01:02:55.214256 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-02 01:02:55.214262 | orchestrator | Thursday 02 April 2026 01:02:29 +0000 (0:00:00.157) 0:01:39.002 ******** 2026-04-02 01:02:55.214268 | orchestrator | 2026-04-02 01:02:55.214274 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-02 01:02:55.214280 | orchestrator | Thursday 02 April 2026 01:02:29 +0000 (0:00:00.196) 0:01:39.199 ******** 2026-04-02 01:02:55.214285 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:02:55.214291 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:02:55.214297 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:02:55.214303 | orchestrator | 2026-04-02 01:02:55.214313 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-02 01:02:55.214319 | orchestrator | Thursday 02 April 2026 01:02:36 +0000 (0:00:06.228) 0:01:45.427 ******** 2026-04-02 01:02:55.214324 | 
orchestrator | changed: [testbed-node-0]
2026-04-02 01:02:55.214333 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:02:55.214347 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:02:55.214359 | orchestrator |
2026-04-02 01:02:55.214369 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-02 01:02:55.214379 | orchestrator | Thursday 02 April 2026 01:02:41 +0000 (0:00:05.604) 0:01:51.032 ********
2026-04-02 01:02:55.214388 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:02:55.214397 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:02:55.214406 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:02:55.214415 | orchestrator |
2026-04-02 01:02:55.214431 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:02:55.214442 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-02 01:02:55.214453 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-02 01:02:55.214462 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-02 01:02:55.214472 | orchestrator |
2026-04-02 01:02:55.214483 | orchestrator |
2026-04-02 01:02:55.214494 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:02:55.214504 | orchestrator | Thursday 02 April 2026 01:02:52 +0000 (0:00:10.922) 0:02:01.955 ********
2026-04-02 01:02:55.214515 | orchestrator | ===============================================================================
2026-04-02 01:02:55.214522 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.59s
2026-04-02 01:02:55.214533 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.09s
2026-04-02 01:02:55.214539 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.92s
2026-04-02 01:02:55.214545 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.74s
2026-04-02 01:02:55.214551 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.27s
2026-04-02 01:02:55.214557 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.23s
2026-04-02 01:02:55.214563 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.60s
2026-04-02 01:02:55.214573 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.08s
2026-04-02 01:02:55.214582 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.38s
2026-04-02 01:02:55.214591 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.38s
2026-04-02 01:02:55.214599 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.82s
2026-04-02 01:02:55.214607 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.74s
2026-04-02 01:02:55.214617 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.74s
2026-04-02 01:02:55.214626 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.71s
2026-04-02 01:02:55.214635 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.97s
2026-04-02 01:02:55.214645 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.76s
2026-04-02 01:02:55.214654 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.71s
2026-04-02 01:02:55.214663 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.39s
2026-04-02 01:02:55.214732 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.44s
2026-04-02 01:02:55.214744 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.33s
2026-04-02 01:02:55.214763 | orchestrator | 2026-04-02 01:02:55 | INFO  | Task 0f603ce7-70f2-4409-8168-6467c57cc248 is in state SUCCESS
2026-04-02 01:02:55.214773 | orchestrator | 2026-04-02 01:02:55 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:02:58.259161 | orchestrator | 2026-04-02 01:02:58 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:02:58.260580 | orchestrator | 2026-04-02 01:02:58 | INFO  | Task d339e0ca-3ffa-4e8d-8491-33e0b1681f45 is in state STARTED
2026-04-02 01:02:58.262219 | orchestrator | 2026-04-02 01:02:58 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED
2026-04-02 01:02:58.263735 | orchestrator | 2026-04-02 01:02:58 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:02:58.263769 | orchestrator | 2026-04-02 01:02:58 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:03:01.306312 | orchestrator | 2026-04-02 01:03:01 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:03:01.307637 | orchestrator | 2026-04-02 01:03:01 | INFO  | Task d339e0ca-3ffa-4e8d-8491-33e0b1681f45 is in state STARTED
2026-04-02 01:03:01.310430 | orchestrator | 2026-04-02 01:03:01 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED
2026-04-02 01:03:01.311865 | orchestrator | 2026-04-02 01:03:01 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:03:01.311898 | orchestrator | 2026-04-02 01:03:01 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:03:04.345848 | orchestrator | 2026-04-02 01:03:04 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:03:04.346341 | orchestrator | 2026-04-02 01:03:04 | INFO  | Task
d339e0ca-3ffa-4e8d-8491-33e0b1681f45 is in state STARTED
2026-04-02 01:03:40.976189 | orchestrator | 2026-04-02 01:03:40 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED
2026-04-02 01:03:40.976386 | orchestrator | 2026-04-02 01:03:40 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:03:40.976412 | orchestrator | 2026-04-02 01:03:40 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:03:44.008843 | orchestrator | 2026-04-02 01:03:44 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:03:44.010505 | orchestrator | 2026-04-02 01:03:44 | INFO  | Task d339e0ca-3ffa-4e8d-8491-33e0b1681f45 is in state SUCCESS
2026-04-02 01:03:44.011308 | orchestrator | 2026-04-02 01:03:44 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED
2026-04-02 01:03:44.012363 | orchestrator | 2026-04-02 01:03:44 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:03:44.013157 | orchestrator | 2026-04-02 01:03:44 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:03:44.013191 | orchestrator | 2026-04-02 01:03:44 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:03:47.037929 | orchestrator | 2026-04-02 01:03:47 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:03:47.038889 | orchestrator | 2026-04-02 01:03:47 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED
2026-04-02 01:03:47.040100 | orchestrator | 2026-04-02 01:03:47 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:03:47.040740 | orchestrator | 2026-04-02 01:03:47 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:03:47.040871 | orchestrator | 2026-04-02 01:03:47 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:03:50.073061 | orchestrator | 2026-04-02 01:03:50 | INFO  | Task
e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:03:59.167708 | orchestrator | 2026-04-02 01:03:59 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED
2026-04-02 01:03:59.171248 | orchestrator | 2026-04-02 01:03:59 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:03:59.176027 | orchestrator | 2026-04-02 01:03:59 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:03:59.176082 | orchestrator | 2026-04-02 01:03:59 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:02.203792 | orchestrator | 2026-04-02 01:04:02 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:02.204090 | orchestrator | 2026-04-02 01:04:02 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state STARTED
2026-04-02 01:04:02.205522 | orchestrator | 2026-04-02 01:04:02 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:02.208750 | orchestrator | 2026-04-02 01:04:02 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:02.208797 | orchestrator | 2026-04-02 01:04:02 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:05.240423 | orchestrator | 2026-04-02 01:04:05 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:05.242311 | orchestrator | 2026-04-02 01:04:05 | INFO  | Task 7ea6187d-bde3-485c-9d5e-be0cd88de8a2 is in state SUCCESS
2026-04-02 01:04:05.242694 | orchestrator |
2026-04-02 01:04:05.242731 | orchestrator |
2026-04-02 01:04:05.242738 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-02 01:04:05.242746 | orchestrator |
2026-04-02 01:04:05.242751 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-02 01:04:05.242756 | orchestrator | Thursday 02 April 2026 01:02:55 +0000 (0:00:00.102) 0:00:00.102 ********
2026-04-02 01:04:05.242759 | orchestrator | changed: [localhost]
2026-04-02 01:04:05.242763 | orchestrator |
2026-04-02 01:04:05.242766 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-02 01:04:05.242769 | orchestrator | Thursday 02 April 2026 01:02:56 +0000 (0:00:00.919) 0:00:01.021 ********
2026-04-02 01:04:05.242773 | orchestrator | changed: [localhost]
2026-04-02 01:04:05.242776 | orchestrator |
2026-04-02 01:04:05.242779 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-02 01:04:05.242782 | orchestrator | Thursday 02 April 2026 01:03:35 +0000 (0:00:38.659) 0:00:39.681 ********
2026-04-02 01:04:05.242785 | orchestrator | changed: [localhost]
2026-04-02 01:04:05.242808 | orchestrator |
2026-04-02 01:04:05.242835 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 01:04:05.242843 | orchestrator |
2026-04-02 01:04:05.242850 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 01:04:05.242855 | orchestrator | Thursday 02 April 2026 01:03:41 +0000 (0:00:06.247) 0:00:45.929 ********
2026-04-02 01:04:05.242861 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:04:05.242866 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:04:05.242872 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:04:05.242877 | orchestrator |
2026-04-02 01:04:05.242882 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 01:04:05.242888 | orchestrator | Thursday 02 April 2026 01:03:42 +0000 (0:00:00.266) 0:00:46.196 ********
2026-04-02 01:04:05.242893 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-02 01:04:05.242898 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-02 01:04:05.242904 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-02 01:04:05.242908 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-02 01:04:05.242913 | orchestrator |
2026-04-02 01:04:05.242918 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-02 01:04:05.242924 | orchestrator | skipping: no hosts matched
2026-04-02 01:04:05.242930 | orchestrator |
2026-04-02 01:04:05.242935 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:04:05.242941 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 01:04:05.242947 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 01:04:05.242968 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 01:04:05.242974 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 01:04:05.242980 | orchestrator |
2026-04-02 01:04:05.242985 | orchestrator |
2026-04-02 01:04:05.242991 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:04:05.242996 | orchestrator | Thursday 02 April 2026 01:03:42 +0000 (0:00:00.343) 0:00:46.539 ********
2026-04-02 01:04:05.243002 | orchestrator | ===============================================================================
2026-04-02 01:04:05.243007 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 38.66s
2026-04-02 01:04:05.243027 | orchestrator | Download ironic-agent kernel -------------------------------------------- 6.25s
2026-04-02 01:04:05.243032 | orchestrator | Ensure the destination directory exists --------------------------------- 0.92s
2026-04-02 01:04:05.243037 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s
2026-04-02 01:04:05.243057 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-04-02 01:04:05.243063 | orchestrator |
2026-04-02 01:04:05.243993 | orchestrator |
2026-04-02 01:04:05.244064 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 01:04:05.244072 | orchestrator |
2026-04-02 01:04:05.244077 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 01:04:05.244083 | orchestrator | Thursday 02 April 2026 01:01:03 +0000 (0:00:00.270) 0:00:00.270 ********
2026-04-02 01:04:05.244089 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:04:05.244095 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:04:05.244100 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:04:05.244106 | orchestrator |
2026-04-02 01:04:05.244111 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 01:04:05.244117 | orchestrator | Thursday 02 April 2026 01:01:03 +0000 (0:00:00.239) 0:00:00.509 ********
2026-04-02 01:04:05.244123 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-02 01:04:05.244129 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-02 01:04:05.244134 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-02 01:04:05.244140 | orchestrator |
2026-04-02 01:04:05.244146 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-02 01:04:05.244152 | orchestrator |
2026-04-02 01:04:05.244157 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-02 01:04:05.244163 | orchestrator | Thursday 02 April 2026 01:01:04 +0000 (0:00:00.250) 0:00:00.760 ********
2026-04-02 01:04:05.244169 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
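The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines in this log come from a client that polls the task backend until every submitted task finishes. As an illustration only, a minimal sketch of that polling pattern — `wait_for_tasks` and `get_state` are hypothetical names, not the actual osism client code:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, log=print):
    """Poll task states until every task leaves the STARTED state.

    get_state: callable mapping a task id to its current state string
    (a hypothetical stand-in for the real task-backend query).
    Emits log lines in the same shape as the console output above.
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so we can discard while iterating.
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

A loop like this explains why finished tasks (e.g. the SUCCESS transitions above) simply drop out of subsequent iterations while the remaining ids keep being reported every interval.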
2026-04-02 01:04:05.244176 | orchestrator |
2026-04-02 01:04:05.244181 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-02 01:04:05.244187 | orchestrator | Thursday 02 April 2026 01:01:04 +0000 (0:00:00.536) 0:00:01.296 ********
2026-04-02 01:04:05.244192 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-02 01:04:05.244197 | orchestrator |
2026-04-02 01:04:05.244202 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-02 01:04:05.244208 | orchestrator | Thursday 02 April 2026 01:01:08 +0000 (0:00:04.128) 0:00:05.425 ********
2026-04-02 01:04:05.244214 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-02 01:04:05.244220 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-02 01:04:05.244225 | orchestrator |
2026-04-02 01:04:05.244231 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-02 01:04:05.244236 | orchestrator | Thursday 02 April 2026 01:01:15 +0000 (0:00:07.102) 0:00:12.527 ********
2026-04-02 01:04:05.244241 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-02 01:04:05.244247 | orchestrator |
2026-04-02 01:04:05.244261 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-02 01:04:05.244266 | orchestrator | Thursday 02 April 2026 01:01:19 +0000 (0:00:03.753) 0:00:16.281 ********
2026-04-02 01:04:05.244271 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-02 01:04:05.244276 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-02 01:04:05.244281 | orchestrator |
2026-04-02 01:04:05.244286 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-02
01:04:05.244292 | orchestrator | Thursday 02 April 2026 01:01:23 +0000 (0:00:04.262) 0:00:20.544 ******** 2026-04-02 01:04:05.244298 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-02 01:04:05.244316 | orchestrator | 2026-04-02 01:04:05.244343 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-02 01:04:05.244398 | orchestrator | Thursday 02 April 2026 01:01:27 +0000 (0:00:03.679) 0:00:24.223 ******** 2026-04-02 01:04:05.244421 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-02 01:04:05.244426 | orchestrator | 2026-04-02 01:04:05.244431 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-02 01:04:05.244436 | orchestrator | Thursday 02 April 2026 01:01:31 +0000 (0:00:04.364) 0:00:28.587 ******** 2026-04-02 01:04:05.244444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.244462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 
01:04:05.244484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.244498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2026-04-02 01:04:05.244553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.244568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245050 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245081 | orchestrator | 2026-04-02 01:04:05.245088 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-02 01:04:05.245094 | orchestrator | Thursday 02 April 2026 01:01:37 +0000 (0:00:05.365) 0:00:33.953 ******** 2026-04-02 01:04:05.245100 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:05.245106 | orchestrator | 2026-04-02 01:04:05.245111 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-02 01:04:05.245117 | orchestrator | Thursday 02 April 2026 01:01:37 +0000 (0:00:00.210) 0:00:34.163 ******** 2026-04-02 01:04:05.245123 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:05.245127 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:05.245131 | orchestrator | skipping: 
[testbed-node-2] 2026-04-02 01:04:05.245134 | orchestrator | 2026-04-02 01:04:05.245137 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-02 01:04:05.245140 | orchestrator | Thursday 02 April 2026 01:01:38 +0000 (0:00:00.547) 0:00:34.711 ******** 2026-04-02 01:04:05.245143 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:04:05.245147 | orchestrator | 2026-04-02 01:04:05.245150 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-02 01:04:05.245153 | orchestrator | Thursday 02 April 2026 01:01:38 +0000 (0:00:00.602) 0:00:35.314 ******** 2026-04-02 01:04:05.245157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.245168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.245172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.245181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 
01:04:05.245216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.245255 | orchestrator | 2026-04-02 01:04:05.245260 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-02 01:04:05.245265 | orchestrator | Thursday 02 April 2026 01:01:46 +0000 (0:00:07.397) 0:00:42.712 ******** 2026-04-02 01:04:05.245271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.245276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 01:04:05.245283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245301 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:05.245304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.245307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 01:04:05.245539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  
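The item dicts echoed above follow kolla-ansible's per-service container definition shape ('container_name', 'image', 'volumes', 'dimensions', 'healthcheck'). As a minimal illustration of what the logged 'healthcheck' block corresponds to in Docker terms, here is a sketch that maps one of these dicts onto `docker run` health-check flags; the function name is mine, and treating the bare numeric strings as seconds is an assumption, not something kolla-ansible states in this log:

```python
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck dict (as logged above) into
    `docker run` health-check flags.

    Assumption: the bare numeric strings ('30', '3', '5') are seconds,
    as Docker expects duration suffixes like '30s'.
    """
    args = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    test = hc["test"]
    # kolla logs the test as ['CMD-SHELL', '<command>'], matching
    # Docker's CMD-SHELL healthcheck form.
    if test and test[0] == "CMD-SHELL":
        args += ["--health-cmd", " ".join(test[1:])]
    return args


# Example input copied from the designate-central items in this log:
hc = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port designate-central 5672"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(hc))
```

This only demonstrates the shape of the data; in the actual deployment these fields are consumed by kolla-ansible's container management, not passed through `docker run` by hand.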
2026-04-02 01:04:05.245557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 01:04:05.245572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245588 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245615 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:05.245620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.245624 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:05.245627 | orchestrator | 2026-04-02 01:04:05.245630 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal 
TLS key] ***
2026-04-02 01:04:05.245634 | orchestrator | Thursday 02 April 2026 01:01:47 +0000 (0:00:01.540)       0:00:44.253 ********
2026-04-02 01:04:05.245637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.245647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245670 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:04:05.245673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.245681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245698 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:04:05.245703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.245710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245727 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:04:05.245730 | orchestrator |
2026-04-02 01:04:05.245734 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-02 01:04:05.245737 | orchestrator | Thursday 02 April 2026 01:01:49 +0000 (0:00:02.030)       0:00:46.283 ********
2026-04-02 01:04:05.245743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.245764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.245770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.245776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.245865 | orchestrator |
2026-04-02 01:04:05.245870 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-02 01:04:05.245875 | orchestrator | Thursday 02 April 2026 01:01:57 +0000 (0:00:08.125)       0:00:54.409 ********
2026-04-02 01:04:05.245883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.245897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.246101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.246108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.246111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-02 01:04:05.246117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:04:05.246198 | orchestrator |
2026-04-02 01:04:05.246204 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-02 01:04:05.246209 | orchestrator | Thursday 02 April 2026 01:02:19 +0000 (0:00:21.690)       0:01:16.099 ********
2026-04-02 01:04:05.246212 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-02 01:04:05.246216 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-02 01:04:05.246221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-02 01:04:05.246226 | orchestrator |
2026-04-02 01:04:05.246231 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-02 01:04:05.246236 | orchestrator | Thursday 02 April 2026 01:02:24 +0000 (0:00:05.441)       0:01:21.541 ********
2026-04-02 01:04:05.246241 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-02 01:04:05.246246 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-02 01:04:05.246252 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-02 01:04:05.246257 | orchestrator |
2026-04-02 01:04:05.246263 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-02 01:04:05.246267 | orchestrator | Thursday 02 April 2026 01:02:28 +0000 (0:00:03.183)       0:01:24.724 ********
2026-04-02 01:04:05.246276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.246282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-02 01:04:05.246290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.246296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246379 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246398 | orchestrator | 2026-04-02 01:04:05.246408 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-02 01:04:05.246414 | 
orchestrator | Thursday 02 April 2026 01:02:31 +0000 (0:00:03.217) 0:01:27.942 ******** 2026-04-02 01:04:05.246425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.246432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.246437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.246445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246492 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246562 | orchestrator | 2026-04-02 01:04:05.246567 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-02 01:04:05.246572 | orchestrator | Thursday 02 April 2026 01:02:34 +0000 (0:00:03.228) 0:01:31.170 ******** 2026-04-02 01:04:05.246651 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:05.246654 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:05.246658 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:05.246661 | orchestrator | 2026-04-02 01:04:05.246664 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-02 01:04:05.246667 | orchestrator | Thursday 02 April 2026 01:02:34 +0000 (0:00:00.230) 0:01:31.400 ******** 2026-04-02 01:04:05.246673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.246677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 01:04:05.246680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246701 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:05.246704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.246709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 01:04:05.246713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246732 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:05.246736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-02 01:04:05.246742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-02 01:04:05.246746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:04:05.246767 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:05.246771 | orchestrator | 2026-04-02 01:04:05.246774 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-02 01:04:05.246779 | orchestrator | Thursday 02 April 2026 01:02:36 +0000 
(0:00:01.561) 0:01:32.962 ******** 2026-04-02 01:04:05.246782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.246790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.246795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-02 01:04:05.246798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246842 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:04:05.246873 | orchestrator | 2026-04-02 01:04:05.246878 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-02 01:04:05.246885 | orchestrator | Thursday 02 April 2026 01:02:41 +0000 (0:00:05.544) 0:01:38.507 ******** 2026-04-02 01:04:05.246893 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:05.246899 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:05.246904 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:05.246909 | orchestrator | 2026-04-02 01:04:05.246914 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-02 01:04:05.246919 | orchestrator | Thursday 02 April 2026 01:02:42 +0000 (0:00:01.045) 0:01:39.552 ******** 2026-04-02 01:04:05.246925 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-02 01:04:05.246931 | orchestrator | 2026-04-02 01:04:05.246936 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-02 01:04:05.246941 | orchestrator | Thursday 02 April 2026 01:02:45 +0000 (0:00:02.283) 0:01:41.835 ******** 2026-04-02 01:04:05.246948 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-02 01:04:05.246951 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-02 01:04:05.246954 | orchestrator | 2026-04-02 01:04:05.246957 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-02 01:04:05.246961 | orchestrator | Thursday 02 April 2026 01:02:47 +0000 (0:00:02.311) 0:01:44.146 ******** 
2026-04-02 01:04:05.246966 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.246972 | orchestrator | 2026-04-02 01:04:05.246977 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-02 01:04:05.246982 | orchestrator | Thursday 02 April 2026 01:03:02 +0000 (0:00:14.582) 0:01:58.728 ******** 2026-04-02 01:04:05.246987 | orchestrator | 2026-04-02 01:04:05.246992 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-02 01:04:05.246997 | orchestrator | Thursday 02 April 2026 01:03:02 +0000 (0:00:00.070) 0:01:58.799 ******** 2026-04-02 01:04:05.247000 | orchestrator | 2026-04-02 01:04:05.247003 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-02 01:04:05.247007 | orchestrator | Thursday 02 April 2026 01:03:02 +0000 (0:00:00.071) 0:01:58.870 ******** 2026-04-02 01:04:05.247012 | orchestrator | 2026-04-02 01:04:05.247020 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-02 01:04:05.247025 | orchestrator | Thursday 02 April 2026 01:03:02 +0000 (0:00:00.075) 0:01:58.946 ******** 2026-04-02 01:04:05.247030 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.247035 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:05.247040 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:05.247044 | orchestrator | 2026-04-02 01:04:05.247053 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-02 01:04:05.247059 | orchestrator | Thursday 02 April 2026 01:03:16 +0000 (0:00:14.028) 0:02:12.975 ******** 2026-04-02 01:04:05.247065 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.247072 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:05.247075 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:05.247078 | orchestrator | 2026-04-02 01:04:05.247082 | 
orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-02 01:04:05.247085 | orchestrator | Thursday 02 April 2026 01:03:27 +0000 (0:00:11.458) 0:02:24.433 ******** 2026-04-02 01:04:05.247088 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.247091 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:05.247096 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:05.247102 | orchestrator | 2026-04-02 01:04:05.247107 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-02 01:04:05.247112 | orchestrator | Thursday 02 April 2026 01:03:34 +0000 (0:00:06.797) 0:02:31.230 ******** 2026-04-02 01:04:05.247117 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:05.247120 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:05.247123 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.247126 | orchestrator | 2026-04-02 01:04:05.247132 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-02 01:04:05.247137 | orchestrator | Thursday 02 April 2026 01:03:44 +0000 (0:00:10.006) 0:02:41.237 ******** 2026-04-02 01:04:05.247141 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.247146 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:05.247151 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:05.247157 | orchestrator | 2026-04-02 01:04:05.247162 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-02 01:04:05.247170 | orchestrator | Thursday 02 April 2026 01:03:51 +0000 (0:00:06.673) 0:02:47.911 ******** 2026-04-02 01:04:05.247174 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.247179 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:05.247184 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:05.247220 | orchestrator | 2026-04-02 01:04:05.247226 | orchestrator | 
TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-02 01:04:05.247231 | orchestrator | Thursday 02 April 2026 01:03:57 +0000 (0:00:06.008) 0:02:53.919 ******** 2026-04-02 01:04:05.247236 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:05.247241 | orchestrator | 2026-04-02 01:04:05.247246 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:04:05.247253 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-02 01:04:05.247259 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-02 01:04:05.247264 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-02 01:04:05.247269 | orchestrator | 2026-04-02 01:04:05.247274 | orchestrator | 2026-04-02 01:04:05.247289 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:04:05.247295 | orchestrator | Thursday 02 April 2026 01:04:04 +0000 (0:00:07.662) 0:03:01.581 ******** 2026-04-02 01:04:05.247300 | orchestrator | =============================================================================== 2026-04-02 01:04:05.247305 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.69s 2026-04-02 01:04:05.247310 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.58s 2026-04-02 01:04:05.247315 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.03s 2026-04-02 01:04:05.247320 | orchestrator | designate : Restart designate-api container ---------------------------- 11.46s 2026-04-02 01:04:05.247325 | orchestrator | designate : Restart designate-producer container ----------------------- 10.01s 2026-04-02 01:04:05.247330 | orchestrator | designate : Copying over config.json files 
for services ----------------- 8.13s 2026-04-02 01:04:05.247335 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.66s 2026-04-02 01:04:05.247347 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.40s 2026-04-02 01:04:05.247353 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.10s 2026-04-02 01:04:05.247358 | orchestrator | designate : Restart designate-central container ------------------------- 6.80s 2026-04-02 01:04:05.247364 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.67s 2026-04-02 01:04:05.247369 | orchestrator | designate : Restart designate-worker container -------------------------- 6.01s 2026-04-02 01:04:05.247374 | orchestrator | designate : Check designate containers ---------------------------------- 5.55s 2026-04-02 01:04:05.247379 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.44s 2026-04-02 01:04:05.247384 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.37s 2026-04-02 01:04:05.247391 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.36s 2026-04-02 01:04:05.247396 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.26s 2026-04-02 01:04:05.247402 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.13s 2026-04-02 01:04:05.247407 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.75s 2026-04-02 01:04:05.247413 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.68s 2026-04-02 01:04:05.247419 | orchestrator | 2026-04-02 01:04:05 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:04:05.247430 | orchestrator | 2026-04-02 01:04:05 | INFO  | Task 
3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:05.247436 | orchestrator | 2026-04-02 01:04:05 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:08.290523 | orchestrator | 2026-04-02 01:04:08 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:08.291261 | orchestrator | 2026-04-02 01:04:08 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:08.294221 | orchestrator | 2026-04-02 01:04:08 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:04:08.294644 | orchestrator | 2026-04-02 01:04:08 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:08.294674 | orchestrator | 2026-04-02 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:11.337902 | orchestrator | 2026-04-02 01:04:11 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:11.338597 | orchestrator | 2026-04-02 01:04:11 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:11.339199 | orchestrator | 2026-04-02 01:04:11 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:04:11.339725 | orchestrator | 2026-04-02 01:04:11 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:11.339900 | orchestrator | 2026-04-02 01:04:11 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:14.370185 | orchestrator | 2026-04-02 01:04:14 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:14.372700 | orchestrator | 2026-04-02 01:04:14 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:14.377482 | orchestrator | 2026-04-02 01:04:14 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED 2026-04-02 01:04:14.380702 | orchestrator | 2026-04-02 01:04:14 | INFO  | Task 
3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:14.380755 | orchestrator | 2026-04-02 01:04:14 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:17.406011 | orchestrator | 2026-04-02 01:04:17 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:17.406318 | orchestrator | 2026-04-02 01:04:17 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:17.407077 | orchestrator | 2026-04-02 01:04:17 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:17.407736 | orchestrator | 2026-04-02 01:04:17 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:17.407770 | orchestrator | 2026-04-02 01:04:17 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:20.448008 | orchestrator | 2026-04-02 01:04:20 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:20.448066 | orchestrator | 2026-04-02 01:04:20 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:20.448073 | orchestrator | 2026-04-02 01:04:20 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:20.450185 | orchestrator | 2026-04-02 01:04:20 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:20.450241 | orchestrator | 2026-04-02 01:04:20 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:23.479321 | orchestrator | 2026-04-02 01:04:23 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:23.479935 | orchestrator | 2026-04-02 01:04:23 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:23.480695 | orchestrator | 2026-04-02 01:04:23 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:23.481376 | orchestrator | 2026-04-02 01:04:23 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:23.481411 | orchestrator | 2026-04-02 01:04:23 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:26.513884 | orchestrator | 2026-04-02 01:04:26 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:26.514351 | orchestrator | 2026-04-02 01:04:26 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:26.515015 | orchestrator | 2026-04-02 01:04:26 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:26.515705 | orchestrator | 2026-04-02 01:04:26 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:26.515762 | orchestrator | 2026-04-02 01:04:26 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:29.556567 | orchestrator | 2026-04-02 01:04:29 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:29.557458 | orchestrator | 2026-04-02 01:04:29 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:29.559806 | orchestrator | 2026-04-02 01:04:29 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:29.560672 | orchestrator | 2026-04-02 01:04:29 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:29.560697 | orchestrator | 2026-04-02 01:04:29 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:32.608263 | orchestrator | 2026-04-02 01:04:32 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:32.610378 | orchestrator | 2026-04-02 01:04:32 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:32.612328 | orchestrator | 2026-04-02 01:04:32 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:32.614156 | orchestrator | 2026-04-02 01:04:32 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:32.614428 | orchestrator | 2026-04-02 01:04:32 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:35.651033 | orchestrator | 2026-04-02 01:04:35 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:35.652184 | orchestrator | 2026-04-02 01:04:35 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:35.653348 | orchestrator | 2026-04-02 01:04:35 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state STARTED
2026-04-02 01:04:35.653945 | orchestrator | 2026-04-02 01:04:35 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED
2026-04-02 01:04:35.653979 | orchestrator | 2026-04-02 01:04:35 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:04:38.689598 | orchestrator | 2026-04-02 01:04:38 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED
2026-04-02 01:04:38.692568 | orchestrator | 2026-04-02 01:04:38 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED
2026-04-02 01:04:38.696620 | orchestrator | 2026-04-02 01:04:38 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED
2026-04-02 01:04:38.700862 | orchestrator | 2026-04-02 01:04:38 | INFO  | Task 3ca9d9b3-5647-4e0f-8bfb-a42a4b2302c0 is in state SUCCESS
2026-04-02 01:04:38.702309 | orchestrator |
2026-04-02 01:04:38.702355 | orchestrator |
2026-04-02 01:04:38.702361 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 01:04:38.702364 | orchestrator |
2026-04-02 01:04:38.702368 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 01:04:38.702371 | orchestrator | Thursday 02 April 2026 01:00:29 +0000 (0:00:00.319) 0:00:00.319 ********
2026-04-02 01:04:38.702375 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:04:38.702379 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:04:38.702382 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:04:38.702385 | orchestrator | ok: [testbed-node-3]
2026-04-02 01:04:38.702388 | orchestrator | ok: [testbed-node-4]
2026-04-02 01:04:38.702391 | orchestrator | ok: [testbed-node-5]
2026-04-02 01:04:38.702394 | orchestrator |
2026-04-02 01:04:38.702397 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 01:04:38.702400 | orchestrator | Thursday 02 April 2026 01:00:29 +0000 (0:00:00.557) 0:00:00.876 ********
2026-04-02 01:04:38.702403 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-02 01:04:38.702406 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-02 01:04:38.702410 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-02 01:04:38.702413 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-02 01:04:38.702416 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-02 01:04:38.702419 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-02 01:04:38.702422 | orchestrator |
2026-04-02 01:04:38.702425 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-02 01:04:38.702428 | orchestrator |
2026-04-02 01:04:38.702431 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-02 01:04:38.702435 | orchestrator | Thursday 02 April 2026 01:00:31 +0000 (0:00:01.260) 0:00:02.137 ********
2026-04-02 01:04:38.702438 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-02 01:04:38.702442 | orchestrator |
2026-04-02 01:04:38.702445 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-02 01:04:38.702448 | orchestrator | Thursday 02 April 2026 01:00:32 +0000 (0:00:01.511) 0:00:03.648 ********
2026-04-02 01:04:38.702451 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:04:38.702454 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:04:38.702457 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:04:38.702460 | orchestrator | ok: [testbed-node-3]
2026-04-02 01:04:38.702475 | orchestrator | ok: [testbed-node-4]
2026-04-02 01:04:38.702478 | orchestrator | ok: [testbed-node-5]
2026-04-02 01:04:38.702481 | orchestrator |
2026-04-02 01:04:38.702484 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-02 01:04:38.702487 | orchestrator | Thursday 02 April 2026 01:00:34 +0000 (0:00:01.817) 0:00:05.465 ********
2026-04-02 01:04:38.702490 | orchestrator | ok: [testbed-node-3]
2026-04-02 01:04:38.702494 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:04:38.702497 | orchestrator | ok: [testbed-node-4]
2026-04-02 01:04:38.702500 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:04:38.702503 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:04:38.702506 | orchestrator | ok: [testbed-node-5]
2026-04-02 01:04:38.702509 | orchestrator |
2026-04-02 01:04:38.702512 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-02 01:04:38.702515 | orchestrator | Thursday 02 April 2026 01:00:35 +0000 (0:00:01.227) 0:00:06.693 ********
2026-04-02 01:04:38.702518 | orchestrator | ok: [testbed-node-0] => {
2026-04-02 01:04:38.702522 | orchestrator |  "changed": false,
2026-04-02 01:04:38.702525 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:04:38.702557 | orchestrator | }
2026-04-02 01:04:38.702561 | orchestrator | ok: [testbed-node-1] => {
2026-04-02 01:04:38.702564 | orchestrator |  "changed": false,
2026-04-02 01:04:38.702567 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:04:38.702570 | orchestrator | }
2026-04-02 01:04:38.702573 | orchestrator | ok: [testbed-node-2] => {
2026-04-02 01:04:38.702576 | orchestrator |  "changed": false,
2026-04-02 01:04:38.702579 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:04:38.702583 | orchestrator | }
2026-04-02 01:04:38.702586 | orchestrator | ok: [testbed-node-3] => {
2026-04-02 01:04:38.702589 | orchestrator |  "changed": false,
2026-04-02 01:04:38.702592 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:04:38.702595 | orchestrator | }
2026-04-02 01:04:38.702598 | orchestrator | ok: [testbed-node-4] => {
2026-04-02 01:04:38.702601 | orchestrator |  "changed": false,
2026-04-02 01:04:38.702604 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:04:38.702607 | orchestrator | }
2026-04-02 01:04:38.702610 | orchestrator | ok: [testbed-node-5] => {
2026-04-02 01:04:38.702613 | orchestrator |  "changed": false,
2026-04-02 01:04:38.702616 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:04:38.702619 | orchestrator | }
2026-04-02 01:04:38.702622 | orchestrator |
2026-04-02 01:04:38.702626 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-02 01:04:38.702629 | orchestrator | Thursday 02 April 2026 01:00:36 +0000 (0:00:00.521) 0:00:07.215 ********
2026-04-02 01:04:38.702632 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:04:38.702635 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:04:38.702638 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:04:38.702641 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:04:38.702644 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:04:38.702647 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:04:38.702707 | orchestrator |
2026-04-02 01:04:38.702711 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-04-02 01:04:38.702714 | orchestrator | Thursday 02 April 2026 01:00:36 +0000 (0:00:00.615) 0:00:07.831 ********
2026-04-02 01:04:38.702717 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-02 01:04:38.702721 | orchestrator |
2026-04-02 01:04:38.702727 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-04-02 01:04:38.702731 | orchestrator | Thursday 02 April 2026 01:00:40 +0000 (0:00:03.482) 0:00:11.313 ********
2026-04-02 01:04:38.702890 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-02 01:04:38.702903 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-02 01:04:38.702907 | orchestrator |
2026-04-02 01:04:38.702921 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-02 01:04:38.702933 | orchestrator | Thursday 02 April 2026 01:00:46 +0000 (0:00:06.476) 0:00:17.790 ********
2026-04-02 01:04:38.702938 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-02 01:04:38.702943 | orchestrator |
2026-04-02 01:04:38.702948 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-02 01:04:38.702953 | orchestrator | Thursday 02 April 2026 01:00:50 +0000 (0:00:03.903) 0:00:21.694 ********
2026-04-02 01:04:38.702957 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-02 01:04:38.702963 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-02 01:04:38.702967 | orchestrator |
2026-04-02 01:04:38.702972 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-02 01:04:38.702977 | orchestrator | Thursday 02 April 2026 01:00:54 +0000 (0:00:04.171) 0:00:25.865 ********
2026-04-02 01:04:38.702982 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-02 01:04:38.702986 | orchestrator |
2026-04-02 01:04:38.702991 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-04-02 01:04:38.702996 | orchestrator | Thursday 02 April 2026 01:00:57 +0000 (0:00:03.014) 0:00:28.880 ********
2026-04-02 01:04:38.703001 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-02 01:04:38.703006 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-02 01:04:38.703012 | orchestrator |
2026-04-02 01:04:38.703017 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-02 01:04:38.703022 | orchestrator | Thursday 02 April 2026 01:01:05 +0000 (0:00:07.189) 0:00:36.069 ********
2026-04-02 01:04:38.703028 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:04:38.703033 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:04:38.703038 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:04:38.703042 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:04:38.703048 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:04:38.703053 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:04:38.703058 | orchestrator |
2026-04-02 01:04:38.703063 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-02 01:04:38.703068 | orchestrator | Thursday 02 April 2026 01:01:05 +0000 (0:00:00.480) 0:00:36.550 ********
2026-04-02 01:04:38.703073 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:04:38.703079 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:04:38.703084 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:04:38.703089 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:04:38.703095 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:04:38.703100 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:04:38.703104 | orchestrator |
2026-04-02 01:04:38.703109 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-02 01:04:38.703114 | orchestrator | Thursday 02 April 2026 01:01:07 +0000 (0:00:01.918) 0:00:38.468 ********
2026-04-02 01:04:38.703119 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:04:38.703125 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:04:38.703130 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:04:38.703135 | orchestrator | ok: [testbed-node-3]
2026-04-02 01:04:38.703140 | orchestrator | ok: [testbed-node-4]
2026-04-02 01:04:38.703145 | orchestrator | ok: [testbed-node-5]
2026-04-02 01:04:38.703150 | orchestrator |
2026-04-02 01:04:38.703154 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-02 01:04:38.703159 | orchestrator | Thursday 02 April 2026 01:01:08 +0000 (0:00:00.883) 0:00:39.352 ********
2026-04-02 01:04:38.703164 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:04:38.703168 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:04:38.703173 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:04:38.703179 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:04:38.703184 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:04:38.703189 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:04:38.703194 | orchestrator |
2026-04-02 01:04:38.703199 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-02 01:04:38.703209 | orchestrator | Thursday 02 April 2026 01:01:10 +0000 (0:00:01.828) 0:00:41.180 ********
2026-04-02 01:04:38.703216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.703230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.703236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.703242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.703247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.703260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.703265 | orchestrator | 2026-04-02 01:04:38.703270 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-02 01:04:38.703275 | orchestrator | Thursday 02 April 2026 01:01:12 +0000 (0:00:02.408) 0:00:43.589 ******** 2026-04-02 01:04:38.703281 | orchestrator | [WARNING]: Skipped 2026-04-02 01:04:38.703286 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-02 01:04:38.703292 | orchestrator | due to this access issue: 2026-04-02 01:04:38.703297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-02 01:04:38.703303 | orchestrator | a directory 2026-04-02 01:04:38.703308 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 01:04:38.703313 | orchestrator | 2026-04-02 01:04:38.703318 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-02 01:04:38.703327 | orchestrator | Thursday 02 April 2026 01:01:13 +0000 (0:00:00.788) 0:00:44.377 ******** 2026-04-02 01:04:38.703333 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 01:04:38.703339 | orchestrator | 2026-04-02 01:04:38.703344 | orchestrator | TASK [service-cert-copy : neutron | 
Copying over extra CA certificates] ******** 2026-04-02 01:04:38.703350 | orchestrator | Thursday 02 April 2026 01:01:14 +0000 (0:00:01.189) 0:00:45.567 ******** 2026-04-02 01:04:38.703355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.703361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-04-02 01:04:38.703492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.703505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.703546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.703554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.703560 | orchestrator | 2026-04-02 01:04:38.703565 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-02 01:04:38.703571 | orchestrator | Thursday 02 April 2026 01:01:18 +0000 (0:00:03.510) 0:00:49.078 ******** 2026-04-02 01:04:38.703578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.703586 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.703590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.703593 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.703596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.703626 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.703634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703647 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.703657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703667 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.703672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703677 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.703682 | orchestrator | 2026-04-02 01:04:38.703689 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-02 01:04:38.703696 | orchestrator | Thursday 02 April 2026 01:01:19 +0000 (0:00:01.671) 0:00:50.750 ******** 2026-04-02 01:04:38.703701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.703706 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.703717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.703722 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.703727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.703737 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.703742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703747 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.703752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703758 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.703762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703767 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.703772 | orchestrator | 2026-04-02 01:04:38.703776 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-02 01:04:38.703781 | orchestrator | Thursday 02 April 2026 01:01:22 +0000 (0:00:02.929) 0:00:53.679 ******** 2026-04-02 01:04:38.703786 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.703791 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.703796 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.703802 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.703807 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.703812 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.703818 | orchestrator | 2026-04-02 01:04:38.703823 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-02 01:04:38.703832 | orchestrator | Thursday 02 April 2026 01:01:24 +0000 (0:00:02.128) 0:00:55.808 ******** 
2026-04-02 01:04:38.703838 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.703843 | orchestrator | 2026-04-02 01:04:38.703848 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-02 01:04:38.703853 | orchestrator | Thursday 02 April 2026 01:01:24 +0000 (0:00:00.230) 0:00:56.038 ******** 2026-04-02 01:04:38.703858 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.703864 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.703874 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.703879 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.703885 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.703890 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.703895 | orchestrator | 2026-04-02 01:04:38.703900 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-02 01:04:38.703906 | orchestrator | Thursday 02 April 2026 01:01:25 +0000 (0:00:00.510) 0:00:56.549 ******** 2026-04-02 01:04:38.703911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-04-02 01:04:38.703916 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.703921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.703926 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.703932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-04-02 01:04:38.703938 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.703948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703957 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.703962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703968 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.703974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.703979 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.703985 | orchestrator | 2026-04-02 01:04:38.703990 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-02 01:04:38.703996 | orchestrator | Thursday 02 April 2026 01:01:27 +0000 (0:00:01.990) 0:00:58.540 ******** 2026-04-02 01:04:38.704001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.704035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.704041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.704047 | orchestrator | 2026-04-02 01:04:38.704052 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-02 01:04:38.704058 | orchestrator | Thursday 02 April 2026 01:01:30 +0000 (0:00:02.733) 0:01:01.273 ******** 2026-04-02 01:04:38.704064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.704104 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.704109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.704118 | orchestrator | 2026-04-02 01:04:38.704123 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-02 01:04:38.704128 | orchestrator | Thursday 02 April 2026 01:01:36 +0000 (0:00:06.219) 0:01:07.493 ******** 2026-04-02 01:04:38.704137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704144 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704157 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704165 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704175 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704183 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704194 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704197 | orchestrator | 2026-04-02 01:04:38.704201 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-02 01:04:38.704205 | orchestrator | Thursday 02 April 2026 01:01:38 +0000 (0:00:02.327) 0:01:09.821 ******** 2026-04-02 01:04:38.704208 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704212 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704215 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704219 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:38.704223 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:38.704228 | orchestrator | changed: 
[testbed-node-0] 2026-04-02 01:04:38.704233 | orchestrator | 2026-04-02 01:04:38.704238 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-02 01:04:38.704244 | orchestrator | Thursday 02 April 2026 01:01:42 +0000 (0:00:03.406) 0:01:13.227 ******** 2026-04-02 01:04:38.704250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704257 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704268 | orchestrator | skipping: 
[testbed-node-3] 2026-04-02 01:04:38.704272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704276 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.704296 | orchestrator | 2026-04-02 01:04:38.704302 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-02 01:04:38.704307 | orchestrator | Thursday 02 April 2026 01:01:46 +0000 (0:00:03.859) 0:01:17.087 ******** 2026-04-02 
01:04:38.704310 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704314 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704318 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704322 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704326 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704329 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704333 | orchestrator | 2026-04-02 01:04:38.704337 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-02 01:04:38.704341 | orchestrator | Thursday 02 April 2026 01:01:49 +0000 (0:00:03.165) 0:01:20.253 ******** 2026-04-02 01:04:38.704344 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704348 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704351 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704355 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704358 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704362 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704366 | orchestrator | 2026-04-02 01:04:38.704369 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-02 01:04:38.704373 | orchestrator | Thursday 02 April 2026 01:01:52 +0000 (0:00:03.280) 0:01:23.533 ******** 2026-04-02 01:04:38.704377 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704381 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704385 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704389 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704392 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704396 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704400 | orchestrator | 2026-04-02 01:04:38.704403 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] 
*********************************** 2026-04-02 01:04:38.704407 | orchestrator | Thursday 02 April 2026 01:01:55 +0000 (0:00:02.661) 0:01:26.195 ******** 2026-04-02 01:04:38.704411 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704414 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704418 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704422 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704425 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704428 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704431 | orchestrator | 2026-04-02 01:04:38.704435 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-02 01:04:38.704438 | orchestrator | Thursday 02 April 2026 01:01:57 +0000 (0:00:02.509) 0:01:28.704 ******** 2026-04-02 01:04:38.704441 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704444 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704447 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704450 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704455 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704459 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704462 | orchestrator | 2026-04-02 01:04:38.704465 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-02 01:04:38.704468 | orchestrator | Thursday 02 April 2026 01:02:01 +0000 (0:00:04.100) 0:01:32.804 ******** 2026-04-02 01:04:38.704471 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704474 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704477 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704480 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704483 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704486 | orchestrator | skipping: 
[testbed-node-5] 2026-04-02 01:04:38.704490 | orchestrator | 2026-04-02 01:04:38.704493 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-02 01:04:38.704496 | orchestrator | Thursday 02 April 2026 01:02:04 +0000 (0:00:03.112) 0:01:35.917 ******** 2026-04-02 01:04:38.704502 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-02 01:04:38.704506 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704509 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-02 01:04:38.704512 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704515 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-02 01:04:38.704518 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704521 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-02 01:04:38.704524 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704527 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-02 01:04:38.704543 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704549 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-02 01:04:38.704555 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704558 | orchestrator | 2026-04-02 01:04:38.704561 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-02 01:04:38.704564 | orchestrator | Thursday 02 April 2026 01:02:08 +0000 (0:00:03.671) 0:01:39.588 ******** 2026-04-02 01:04:38.704568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704574 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704578 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704592 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704607 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704618 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704628 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704633 | orchestrator | 2026-04-02 01:04:38.704637 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-02 01:04:38.704642 | orchestrator | Thursday 02 April 2026 01:02:12 +0000 (0:00:03.514) 0:01:43.103 ******** 2026-04-02 01:04:38.704647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704653 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704677 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704688 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.704700 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704710 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.704725 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704730 | orchestrator | 2026-04-02 01:04:38.704735 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-02 01:04:38.704740 | orchestrator | Thursday 02 April 2026 01:02:14 +0000 (0:00:02.708) 0:01:45.812 ******** 2026-04-02 01:04:38.704746 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704754 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704760 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704765 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704771 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704776 | orchestrator | skipping: [testbed-node-5] 2026-04-02 
01:04:38.704781 | orchestrator | 2026-04-02 01:04:38.704787 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-02 01:04:38.704792 | orchestrator | Thursday 02 April 2026 01:02:17 +0000 (0:00:02.774) 0:01:48.586 ******** 2026-04-02 01:04:38.704797 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704802 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704808 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704813 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:04:38.704819 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:04:38.704824 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:04:38.704829 | orchestrator | 2026-04-02 01:04:38.704835 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-02 01:04:38.704840 | orchestrator | Thursday 02 April 2026 01:02:21 +0000 (0:00:03.682) 0:01:52.269 ******** 2026-04-02 01:04:38.704846 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704852 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704857 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704863 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704868 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704874 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704879 | orchestrator | 2026-04-02 01:04:38.704884 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-02 01:04:38.704889 | orchestrator | Thursday 02 April 2026 01:02:24 +0000 (0:00:03.003) 0:01:55.272 ******** 2026-04-02 01:04:38.704895 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704900 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704905 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704910 | orchestrator | skipping: [testbed-node-4] 2026-04-02 
01:04:38.704916 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704921 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704926 | orchestrator | 2026-04-02 01:04:38.704931 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-02 01:04:38.704936 | orchestrator | Thursday 02 April 2026 01:02:26 +0000 (0:00:02.413) 0:01:57.685 ******** 2026-04-02 01:04:38.704942 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.704947 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704953 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.704958 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.704963 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.704970 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.704975 | orchestrator | 2026-04-02 01:04:38.704981 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-02 01:04:38.704986 | orchestrator | Thursday 02 April 2026 01:02:28 +0000 (0:00:01.859) 0:01:59.545 ******** 2026-04-02 01:04:38.704991 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.704997 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.705002 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.705013 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.705018 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.705023 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.705028 | orchestrator | 2026-04-02 01:04:38.705034 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-02 01:04:38.705039 | orchestrator | Thursday 02 April 2026 01:02:30 +0000 (0:00:02.354) 0:02:01.900 ******** 2026-04-02 01:04:38.705045 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.705050 | orchestrator | skipping: [testbed-node-1] 2026-04-02 
01:04:38.705055 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.705159 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.705165 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.705170 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.705176 | orchestrator | 2026-04-02 01:04:38.705181 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-02 01:04:38.705186 | orchestrator | Thursday 02 April 2026 01:02:33 +0000 (0:00:02.262) 0:02:04.163 ******** 2026-04-02 01:04:38.705192 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.705198 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.705203 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.705209 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.705215 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.705220 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.705226 | orchestrator | 2026-04-02 01:04:38.705231 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-02 01:04:38.705236 | orchestrator | Thursday 02 April 2026 01:02:34 +0000 (0:00:01.570) 0:02:05.733 ******** 2026-04-02 01:04:38.705241 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.705247 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.705252 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.705257 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.705262 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.705268 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.705273 | orchestrator | 2026-04-02 01:04:38.705278 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-02 01:04:38.705284 | orchestrator | Thursday 02 April 2026 01:02:37 +0000 (0:00:02.855) 0:02:08.589 ******** 2026-04-02 
01:04:38.705289 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-02 01:04:38.705296 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.705301 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-02 01:04:38.705307 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.705313 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-02 01:04:38.705319 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.705325 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-02 01:04:38.705331 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.705342 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-02 01:04:38.705348 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.705354 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-02 01:04:38.705359 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.705364 | orchestrator | 2026-04-02 01:04:38.705369 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-02 01:04:38.705375 | orchestrator | Thursday 02 April 2026 01:02:39 +0000 (0:00:01.905) 0:02:10.494 ******** 2026-04-02 01:04:38.705381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.705394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.705451 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.705461 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.705467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.705473 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.705478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-02 01:04:38.705484 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.705495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.705505 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.705511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-02 01:04:38.705517 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.705522 | orchestrator | 2026-04-02 01:04:38.705527 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-02 01:04:38.705565 | orchestrator | Thursday 02 April 2026 01:02:41 +0000 (0:00:02.230) 0:02:12.724 ******** 2026-04-02 01:04:38.705571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.705576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.705585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.705591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.705602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-02 01:04:38.705608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-02 01:04:38.705614 | orchestrator | 2026-04-02 01:04:38.705619 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-02 01:04:38.705625 | orchestrator | Thursday 02 April 2026 01:02:44 +0000 (0:00:03.149) 0:02:15.873 ******** 2026-04-02 01:04:38.705630 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:38.705635 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:38.705641 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:38.705647 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:04:38.705652 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:04:38.705657 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:04:38.705662 | orchestrator | 2026-04-02 01:04:38.705667 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-02 01:04:38.705673 | orchestrator | Thursday 02 April 2026 01:02:45 +0000 (0:00:00.548) 0:02:16.422 ******** 2026-04-02 01:04:38.705678 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:38.705683 | orchestrator | 2026-04-02 01:04:38.705689 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-02 01:04:38.705694 | orchestrator | Thursday 02 April 2026 01:02:47 +0000 (0:00:02.165) 0:02:18.587 ******** 2026-04-02 01:04:38.705699 | orchestrator | changed: [testbed-node-0] 
2026-04-02 01:04:38.705704 | orchestrator | 2026-04-02 01:04:38.705710 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-02 01:04:38.705715 | orchestrator | Thursday 02 April 2026 01:02:49 +0000 (0:00:02.245) 0:02:20.833 ******** 2026-04-02 01:04:38.705721 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:38.705726 | orchestrator | 2026-04-02 01:04:38.705733 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-02 01:04:38.705743 | orchestrator | Thursday 02 April 2026 01:03:30 +0000 (0:00:41.119) 0:03:01.953 ******** 2026-04-02 01:04:38.705748 | orchestrator | 2026-04-02 01:04:38.705754 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-02 01:04:38.705759 | orchestrator | Thursday 02 April 2026 01:03:30 +0000 (0:00:00.094) 0:03:02.047 ******** 2026-04-02 01:04:38.705764 | orchestrator | 2026-04-02 01:04:38.705769 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-02 01:04:38.705774 | orchestrator | Thursday 02 April 2026 01:03:31 +0000 (0:00:00.076) 0:03:02.123 ******** 2026-04-02 01:04:38.705779 | orchestrator | 2026-04-02 01:04:38.705784 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-02 01:04:38.705790 | orchestrator | Thursday 02 April 2026 01:03:31 +0000 (0:00:00.063) 0:03:02.187 ******** 2026-04-02 01:04:38.705795 | orchestrator | 2026-04-02 01:04:38.705804 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-02 01:04:38.705809 | orchestrator | Thursday 02 April 2026 01:03:31 +0000 (0:00:00.080) 0:03:02.267 ******** 2026-04-02 01:04:38.705815 | orchestrator | 2026-04-02 01:04:38.705820 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-02 01:04:38.705825 | orchestrator | 
Thursday 02 April 2026 01:03:31 +0000 (0:00:00.064) 0:03:02.332 ******** 2026-04-02 01:04:38.705830 | orchestrator | 2026-04-02 01:04:38.705835 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-02 01:04:38.705840 | orchestrator | Thursday 02 April 2026 01:03:31 +0000 (0:00:00.067) 0:03:02.399 ******** 2026-04-02 01:04:38.705846 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:38.705851 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:38.705856 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:38.705862 | orchestrator | 2026-04-02 01:04:38.705867 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-02 01:04:38.705873 | orchestrator | Thursday 02 April 2026 01:03:54 +0000 (0:00:22.846) 0:03:25.245 ******** 2026-04-02 01:04:38.705878 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:04:38.705886 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:04:38.705891 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:04:38.705896 | orchestrator | 2026-04-02 01:04:38.705901 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:04:38.705907 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-02 01:04:38.705913 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-02 01:04:38.705919 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-02 01:04:38.705924 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-02 01:04:38.705930 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-02 01:04:38.705935 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 
failed=0 skipped=32  rescued=0 ignored=0 2026-04-02 01:04:38.705940 | orchestrator | 2026-04-02 01:04:38.705945 | orchestrator | 2026-04-02 01:04:38.705951 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:04:38.705956 | orchestrator | Thursday 02 April 2026 01:04:37 +0000 (0:00:43.018) 0:04:08.264 ******** 2026-04-02 01:04:38.705962 | orchestrator | =============================================================================== 2026-04-02 01:04:38.705968 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 43.02s 2026-04-02 01:04:38.705978 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.12s 2026-04-02 01:04:38.705984 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.85s 2026-04-02 01:04:38.705989 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.19s 2026-04-02 01:04:38.705995 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.48s 2026-04-02 01:04:38.706000 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.22s 2026-04-02 01:04:38.706005 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.17s 2026-04-02 01:04:38.706010 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.10s 2026-04-02 01:04:38.706049 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.90s 2026-04-02 01:04:38.706055 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.86s 2026-04-02 01:04:38.706061 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.68s 2026-04-02 01:04:38.706067 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.67s 2026-04-02 
01:04:38.706072 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.51s 2026-04-02 01:04:38.706078 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.51s 2026-04-02 01:04:38.706084 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.48s 2026-04-02 01:04:38.706090 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.41s 2026-04-02 01:04:38.706096 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.28s 2026-04-02 01:04:38.706102 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.17s 2026-04-02 01:04:38.706108 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.15s 2026-04-02 01:04:38.706114 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.11s 2026-04-02 01:04:38.706120 | orchestrator | 2026-04-02 01:04:38 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:38.706125 | orchestrator | 2026-04-02 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:41.752127 | orchestrator | 2026-04-02 01:04:41 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:41.754091 | orchestrator | 2026-04-02 01:04:41 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:04:41.756914 | orchestrator | 2026-04-02 01:04:41 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:41.759119 | orchestrator | 2026-04-02 01:04:41 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:41.759165 | orchestrator | 2026-04-02 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:44.789187 | orchestrator | 2026-04-02 01:04:44 | INFO  | Task 
e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:44.790267 | orchestrator | 2026-04-02 01:04:44 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:04:44.791351 | orchestrator | 2026-04-02 01:04:44 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:44.792217 | orchestrator | 2026-04-02 01:04:44 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:44.792252 | orchestrator | 2026-04-02 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:47.818816 | orchestrator | 2026-04-02 01:04:47 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:47.819491 | orchestrator | 2026-04-02 01:04:47 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:04:47.820558 | orchestrator | 2026-04-02 01:04:47 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:47.821699 | orchestrator | 2026-04-02 01:04:47 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:47.821935 | orchestrator | 2026-04-02 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:50.861300 | orchestrator | 2026-04-02 01:04:50 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:50.864551 | orchestrator | 2026-04-02 01:04:50 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:04:50.866272 | orchestrator | 2026-04-02 01:04:50 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:50.868134 | orchestrator | 2026-04-02 01:04:50 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:50.868273 | orchestrator | 2026-04-02 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:53.917141 | orchestrator | 2026-04-02 01:04:53 | INFO  | Task 
e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:53.920993 | orchestrator | 2026-04-02 01:04:53 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:04:53.921802 | orchestrator | 2026-04-02 01:04:53 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:53.925783 | orchestrator | 2026-04-02 01:04:53 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state STARTED 2026-04-02 01:04:53.925875 | orchestrator | 2026-04-02 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:04:56.969016 | orchestrator | 2026-04-02 01:04:56 | INFO  | Task fc6cd18e-e7fe-431a-a38a-eca351309cf0 is in state STARTED 2026-04-02 01:04:56.969947 | orchestrator | 2026-04-02 01:04:56 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:04:56.970882 | orchestrator | 2026-04-02 01:04:56 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:04:56.971716 | orchestrator | 2026-04-02 01:04:56 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:04:56.973416 | orchestrator | 2026-04-02 01:04:56 | INFO  | Task 3900a125-c4f4-4fad-93c1-61992c356777 is in state SUCCESS 2026-04-02 01:04:56.976169 | orchestrator | 2026-04-02 01:04:56.976221 | orchestrator | 2026-04-02 01:04:56.976228 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:04:56.976234 | orchestrator | 2026-04-02 01:04:56.976239 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:04:56.976244 | orchestrator | Thursday 02 April 2026 01:03:46 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-04-02 01:04:56.976250 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:04:56.976255 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:04:56.976260 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:04:56.976265 | orchestrator | 
2026-04-02 01:04:56.976269 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 01:04:56.976273 | orchestrator | Thursday 02 April 2026 01:03:46 +0000 (0:00:00.381) 0:00:00.720 ******** 2026-04-02 01:04:56.976278 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-02 01:04:56.976284 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-02 01:04:56.976288 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-02 01:04:56.976293 | orchestrator | 2026-04-02 01:04:56.976297 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-02 01:04:56.976303 | orchestrator | 2026-04-02 01:04:56.976307 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-02 01:04:56.976312 | orchestrator | Thursday 02 April 2026 01:03:47 +0000 (0:00:00.380) 0:00:01.101 ******** 2026-04-02 01:04:56.976346 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:04:56.976353 | orchestrator | 2026-04-02 01:04:56.976358 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-02 01:04:56.976363 | orchestrator | Thursday 02 April 2026 01:03:47 +0000 (0:00:00.474) 0:00:01.575 ******** 2026-04-02 01:04:56.976368 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-02 01:04:56.976374 | orchestrator | 2026-04-02 01:04:56.976378 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-02 01:04:56.976384 | orchestrator | Thursday 02 April 2026 01:03:52 +0000 (0:00:04.426) 0:00:06.002 ******** 2026-04-02 01:04:56.976389 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-02 01:04:56.976394 | orchestrator | 
changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-02 01:04:56.976399 | orchestrator | 2026-04-02 01:04:56.976404 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-02 01:04:56.976409 | orchestrator | Thursday 02 April 2026 01:03:59 +0000 (0:00:06.881) 0:00:12.883 ******** 2026-04-02 01:04:56.976414 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-02 01:04:56.976420 | orchestrator | 2026-04-02 01:04:56.976425 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-02 01:04:56.976430 | orchestrator | Thursday 02 April 2026 01:04:02 +0000 (0:00:03.290) 0:00:16.174 ******** 2026-04-02 01:04:56.976435 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-02 01:04:56.976440 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-02 01:04:56.976444 | orchestrator | 2026-04-02 01:04:56.976450 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-02 01:04:56.976454 | orchestrator | Thursday 02 April 2026 01:04:06 +0000 (0:00:03.965) 0:00:20.139 ******** 2026-04-02 01:04:56.976459 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-02 01:04:56.976464 | orchestrator | 2026-04-02 01:04:56.976469 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-02 01:04:56.976474 | orchestrator | Thursday 02 April 2026 01:04:09 +0000 (0:00:02.850) 0:00:22.990 ******** 2026-04-02 01:04:56.976478 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-02 01:04:56.976484 | orchestrator | 2026-04-02 01:04:56.976489 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-02 01:04:56.976493 | orchestrator | Thursday 02 April 2026 01:04:12 +0000 (0:00:03.388) 0:00:26.378 ******** 
2026-04-02 01:04:56.976499 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:56.976536 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:56.976542 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:56.976547 | orchestrator | 2026-04-02 01:04:56.976553 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-02 01:04:56.976558 | orchestrator | Thursday 02 April 2026 01:04:12 +0000 (0:00:00.223) 0:00:26.601 ******** 2026-04-02 01:04:56.976566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976602 | orchestrator | 2026-04-02 01:04:56.976607 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-02 01:04:56.976612 | orchestrator | Thursday 02 April 2026 01:04:14 +0000 (0:00:01.430) 0:00:28.031 ******** 2026-04-02 01:04:56.976618 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:56.976622 | orchestrator | 2026-04-02 01:04:56.976627 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-02 01:04:56.976632 | orchestrator | Thursday 02 April 2026 01:04:14 +0000 (0:00:00.112) 0:00:28.144 ******** 2026-04-02 01:04:56.976637 | orchestrator | 
skipping: [testbed-node-0] 2026-04-02 01:04:56.976642 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:56.976647 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:56.976651 | orchestrator | 2026-04-02 01:04:56.976656 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-02 01:04:56.976661 | orchestrator | Thursday 02 April 2026 01:04:14 +0000 (0:00:00.257) 0:00:28.401 ******** 2026-04-02 01:04:56.976666 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:04:56.976670 | orchestrator | 2026-04-02 01:04:56.976675 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-02 01:04:56.976680 | orchestrator | Thursday 02 April 2026 01:04:15 +0000 (0:00:00.661) 0:00:29.062 ******** 2026-04-02 01:04:56.976685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976712 | orchestrator | 2026-04-02 01:04:56.976717 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-02 01:04:56.976721 | orchestrator | Thursday 02 April 2026 01:04:16 +0000 (0:00:01.531) 0:00:30.594 ******** 2026-04-02 
01:04:56.976726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976731 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:56.976736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976745 | orchestrator | skipping: [testbed-node-1] 
2026-04-02 01:04:56.976753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976758 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:56.976763 | orchestrator | 2026-04-02 01:04:56.976768 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-02 01:04:56.976773 | orchestrator | Thursday 02 April 2026 01:04:17 +0000 (0:00:00.395) 0:00:30.989 ******** 2026-04-02 01:04:56.976778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976783 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:56.976788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976793 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:56.976807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976816 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:56.976822 | orchestrator | 2026-04-02 01:04:56.976827 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-02 01:04:56.976833 | orchestrator | Thursday 02 April 2026 01:04:17 +0000 (0:00:00.503) 0:00:31.493 ******** 2026-04-02 01:04:56.976841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976858 | orchestrator | 2026-04-02 01:04:56.976861 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-02 01:04:56.976864 | orchestrator | Thursday 02 April 2026 01:04:19 +0000 (0:00:01.871) 0:00:33.365 ******** 2026-04-02 01:04:56.976870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976887 | orchestrator | 2026-04-02 01:04:56.976890 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-02 01:04:56.976893 | orchestrator | Thursday 02 April 2026 01:04:22 +0000 (0:00:02.797) 0:00:36.163 ******** 2026-04-02 01:04:56.976897 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-02 01:04:56.976900 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-02 01:04:56.976903 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-02 01:04:56.976906 | orchestrator | 2026-04-02 01:04:56.976910 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-02 01:04:56.976913 | orchestrator | Thursday 02 April 2026 01:04:24 +0000 (0:00:01.918) 0:00:38.081 ******** 2026-04-02 01:04:56.976916 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:56.976919 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:56.976922 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:56.976925 | orchestrator | 2026-04-02 01:04:56.976928 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-02 01:04:56.976931 | orchestrator | Thursday 02 April 2026 01:04:25 +0000 
(0:00:01.226) 0:00:39.308 ******** 2026-04-02 01:04:56.976935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976940 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:04:56.976945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 
01:04:56.976949 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:04:56.976955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-02 01:04:56.976958 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:04:56.976961 | orchestrator | 2026-04-02 01:04:56.976964 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-02 01:04:56.976968 | orchestrator | Thursday 02 April 2026 01:04:26 +0000 (0:00:00.627) 0:00:39.936 ******** 2026-04-02 01:04:56.976971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-02 01:04:56.976985 | orchestrator | 2026-04-02 01:04:56.976988 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-02 01:04:56.976992 | orchestrator | Thursday 02 April 2026 01:04:27 +0000 (0:00:01.029) 0:00:40.966 ******** 2026-04-02 01:04:56.976995 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:56.976998 | orchestrator | 2026-04-02 01:04:56.977001 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-02 01:04:56.977004 | orchestrator | Thursday 02 April 2026 01:04:29 +0000 (0:00:01.915) 0:00:42.881 ******** 2026-04-02 01:04:56.977007 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:56.977010 | orchestrator | 2026-04-02 01:04:56.977013 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-02 01:04:56.977016 | orchestrator | Thursday 02 April 2026 01:04:31 +0000 (0:00:02.097) 0:00:44.979 ******** 2026-04-02 01:04:56.977020 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:56.977023 | orchestrator | 2026-04-02 01:04:56.977026 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-02 01:04:56.977029 | orchestrator | Thursday 02 April 2026 01:04:43 +0000 (0:00:12.461) 0:00:57.440 ******** 2026-04-02 01:04:56.977032 | orchestrator | 2026-04-02 01:04:56.977035 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-02 01:04:56.977038 | orchestrator | Thursday 02 April 2026 01:04:43 +0000 (0:00:00.073) 0:00:57.514 ******** 2026-04-02 01:04:56.977041 | orchestrator | 2026-04-02 01:04:56.977046 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-02 01:04:56.977050 | 
orchestrator | Thursday 02 April 2026 01:04:43 +0000 (0:00:00.134) 0:00:57.648 ******** 2026-04-02 01:04:56.977053 | orchestrator | 2026-04-02 01:04:56.977056 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-02 01:04:56.977059 | orchestrator | Thursday 02 April 2026 01:04:44 +0000 (0:00:00.098) 0:00:57.746 ******** 2026-04-02 01:04:56.977062 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:04:56.977071 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:04:56.977074 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:04:56.977077 | orchestrator | 2026-04-02 01:04:56.977081 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:04:56.977084 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-02 01:04:56.977088 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-02 01:04:56.977094 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-02 01:04:56.977097 | orchestrator | 2026-04-02 01:04:56.977100 | orchestrator | 2026-04-02 01:04:56.977103 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:04:56.977106 | orchestrator | Thursday 02 April 2026 01:04:54 +0000 (0:00:10.081) 0:01:07.828 ******** 2026-04-02 01:04:56.977109 | orchestrator | =============================================================================== 2026-04-02 01:04:56.977112 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.46s 2026-04-02 01:04:56.977116 | orchestrator | placement : Restart placement-api container ---------------------------- 10.08s 2026-04-02 01:04:56.977119 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.88s 2026-04-02 
01:04:56.977122 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.43s 2026-04-02 01:04:56.977125 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.96s 2026-04-02 01:04:56.977128 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.39s 2026-04-02 01:04:56.977131 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.29s 2026-04-02 01:04:56.977134 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.85s 2026-04-02 01:04:56.977137 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.80s 2026-04-02 01:04:56.977141 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.10s 2026-04-02 01:04:56.977144 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.92s 2026-04-02 01:04:56.977147 | orchestrator | placement : Creating placement databases -------------------------------- 1.92s 2026-04-02 01:04:56.977150 | orchestrator | placement : Copying over config.json files for services ----------------- 1.87s 2026-04-02 01:04:56.977153 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.53s 2026-04-02 01:04:56.977156 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.43s 2026-04-02 01:04:56.977159 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.23s 2026-04-02 01:04:56.977162 | orchestrator | placement : Check placement containers ---------------------------------- 1.03s 2026-04-02 01:04:56.977168 | orchestrator | placement : include_tasks ----------------------------------------------- 0.66s 2026-04-02 01:04:56.977171 | orchestrator | placement : Copying over existing policy file --------------------------- 0.63s 2026-04-02 01:04:56.977174 
| orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.51s 2026-04-02 01:04:56.977177 | orchestrator | 2026-04-02 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:00.013949 | orchestrator | 2026-04-02 01:05:00 | INFO  | Task fc6cd18e-e7fe-431a-a38a-eca351309cf0 is in state STARTED 2026-04-02 01:05:00.014296 | orchestrator | 2026-04-02 01:05:00 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:00.014945 | orchestrator | 2026-04-02 01:05:00 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:00.015800 | orchestrator | 2026-04-02 01:05:00 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:00.015832 | orchestrator | 2026-04-02 01:05:00 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:03.053676 | orchestrator | 2026-04-02 01:05:03 | INFO  | Task fc6cd18e-e7fe-431a-a38a-eca351309cf0 is in state SUCCESS 2026-04-02 01:05:03.055996 | orchestrator | 2026-04-02 01:05:03 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:03.059010 | orchestrator | 2026-04-02 01:05:03 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:03.059056 | orchestrator | 2026-04-02 01:05:03 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:03.059081 | orchestrator | 2026-04-02 01:05:03 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:03.059085 | orchestrator | 2026-04-02 01:05:03 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:06.128938 | orchestrator | 2026-04-02 01:05:06 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:06.129484 | orchestrator | 2026-04-02 01:05:06 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:06.130152 | orchestrator | 2026-04-02 01:05:06 | INFO 
 | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:06.130791 | orchestrator | 2026-04-02 01:05:06 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:06.130823 | orchestrator | 2026-04-02 01:05:06 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:09.156524 | orchestrator | 2026-04-02 01:05:09 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:09.159058 | orchestrator | 2026-04-02 01:05:09 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:09.161000 | orchestrator | 2026-04-02 01:05:09 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:09.163321 | orchestrator | 2026-04-02 01:05:09 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:09.163385 | orchestrator | 2026-04-02 01:05:09 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:12.198829 | orchestrator | 2026-04-02 01:05:12 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:12.202205 | orchestrator | 2026-04-02 01:05:12 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:12.203903 | orchestrator | 2026-04-02 01:05:12 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:12.205322 | orchestrator | 2026-04-02 01:05:12 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:12.205365 | orchestrator | 2026-04-02 01:05:12 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:15.264367 | orchestrator | 2026-04-02 01:05:15 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:15.265595 | orchestrator | 2026-04-02 01:05:15 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:15.268366 | orchestrator | 2026-04-02 01:05:15 | INFO  | Task 
b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:15.271092 | orchestrator | 2026-04-02 01:05:15 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:15.271193 | orchestrator | 2026-04-02 01:05:15 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:18.315541 | orchestrator | 2026-04-02 01:05:18 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:18.318187 | orchestrator | 2026-04-02 01:05:18 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:18.319424 | orchestrator | 2026-04-02 01:05:18 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:18.320985 | orchestrator | 2026-04-02 01:05:18 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:18.321012 | orchestrator | 2026-04-02 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:21.370881 | orchestrator | 2026-04-02 01:05:21 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:21.371440 | orchestrator | 2026-04-02 01:05:21 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:21.372524 | orchestrator | 2026-04-02 01:05:21 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:21.373380 | orchestrator | 2026-04-02 01:05:21 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:21.373398 | orchestrator | 2026-04-02 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:24.423090 | orchestrator | 2026-04-02 01:05:24 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:24.424664 | orchestrator | 2026-04-02 01:05:24 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:24.426169 | orchestrator | 2026-04-02 01:05:24 | INFO  | Task 
b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:24.427184 | orchestrator | 2026-04-02 01:05:24 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:24.427216 | orchestrator | 2026-04-02 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:27.479171 | orchestrator | 2026-04-02 01:05:27 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:27.479796 | orchestrator | 2026-04-02 01:05:27 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:27.480520 | orchestrator | 2026-04-02 01:05:27 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:27.481421 | orchestrator | 2026-04-02 01:05:27 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:27.481442 | orchestrator | 2026-04-02 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:30.514709 | orchestrator | 2026-04-02 01:05:30 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:30.516803 | orchestrator | 2026-04-02 01:05:30 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:30.519142 | orchestrator | 2026-04-02 01:05:30 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:30.520744 | orchestrator | 2026-04-02 01:05:30 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:30.520790 | orchestrator | 2026-04-02 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:33.564805 | orchestrator | 2026-04-02 01:05:33 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:33.566265 | orchestrator | 2026-04-02 01:05:33 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:33.567879 | orchestrator | 2026-04-02 01:05:33 | INFO  | Task 
b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:33.569245 | orchestrator | 2026-04-02 01:05:33 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:33.569344 | orchestrator | 2026-04-02 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:36.614143 | orchestrator | 2026-04-02 01:05:36 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:36.615871 | orchestrator | 2026-04-02 01:05:36 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:36.617706 | orchestrator | 2026-04-02 01:05:36 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:36.619144 | orchestrator | 2026-04-02 01:05:36 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:36.619238 | orchestrator | 2026-04-02 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:39.660574 | orchestrator | 2026-04-02 01:05:39 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:39.661416 | orchestrator | 2026-04-02 01:05:39 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:39.661508 | orchestrator | 2026-04-02 01:05:39 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:39.666210 | orchestrator | 2026-04-02 01:05:39 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:39.666261 | orchestrator | 2026-04-02 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:42.703843 | orchestrator | 2026-04-02 01:05:42 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:42.704233 | orchestrator | 2026-04-02 01:05:42 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:42.704889 | orchestrator | 2026-04-02 01:05:42 | INFO  | Task 
b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:42.706321 | orchestrator | 2026-04-02 01:05:42 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:42.706342 | orchestrator | 2026-04-02 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:45.748271 | orchestrator | 2026-04-02 01:05:45 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:45.749200 | orchestrator | 2026-04-02 01:05:45 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:45.749833 | orchestrator | 2026-04-02 01:05:45 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:45.750942 | orchestrator | 2026-04-02 01:05:45 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:45.751244 | orchestrator | 2026-04-02 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:48.789575 | orchestrator | 2026-04-02 01:05:48 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:48.791728 | orchestrator | 2026-04-02 01:05:48 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:48.793425 | orchestrator | 2026-04-02 01:05:48 | INFO  | Task b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state STARTED 2026-04-02 01:05:48.795113 | orchestrator | 2026-04-02 01:05:48 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:48.795160 | orchestrator | 2026-04-02 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:51.835932 | orchestrator | 2026-04-02 01:05:51 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:51.838500 | orchestrator | 2026-04-02 01:05:51 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:51.841181 | orchestrator | 2026-04-02 01:05:51 | INFO  | Task 
b1ded223-f030-4800-bdf3-9ef9ec9b88fd is in state SUCCESS 2026-04-02 01:05:51.842208 | orchestrator | 2026-04-02 01:05:51.842243 | orchestrator | 2026-04-02 01:05:51.842247 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:05:51.842251 | orchestrator | 2026-04-02 01:05:51.842255 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:05:51.842258 | orchestrator | Thursday 02 April 2026 01:04:58 +0000 (0:00:00.192) 0:00:00.192 ******** 2026-04-02 01:05:51.842262 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:05:51.842266 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:05:51.842269 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:05:51.842272 | orchestrator | 2026-04-02 01:05:51.842288 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 01:05:51.842291 | orchestrator | Thursday 02 April 2026 01:04:58 +0000 (0:00:00.365) 0:00:00.557 ******** 2026-04-02 01:05:51.842294 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-02 01:05:51.842298 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-02 01:05:51.842301 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-02 01:05:51.842304 | orchestrator | 2026-04-02 01:05:51.842307 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-04-02 01:05:51.842310 | orchestrator | 2026-04-02 01:05:51.842314 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-02 01:05:51.842317 | orchestrator | Thursday 02 April 2026 01:04:58 +0000 (0:00:00.490) 0:00:01.048 ******** 2026-04-02 01:05:51.842320 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:05:51.842323 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:05:51.842326 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:05:51.842329 | 
orchestrator | 2026-04-02 01:05:51.842332 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:05:51.842336 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 01:05:51.842340 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 01:05:51.842344 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 01:05:51.842347 | orchestrator | 2026-04-02 01:05:51.842350 | orchestrator | 2026-04-02 01:05:51.842353 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:05:51.842356 | orchestrator | Thursday 02 April 2026 01:05:00 +0000 (0:00:01.393) 0:00:02.442 ******** 2026-04-02 01:05:51.842366 | orchestrator | =============================================================================== 2026-04-02 01:05:51.842369 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.39s 2026-04-02 01:05:51.842372 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-04-02 01:05:51.842375 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-04-02 01:05:51.842378 | orchestrator | 2026-04-02 01:05:51.842382 | orchestrator | 2026-04-02 01:05:51.842385 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:05:51.842388 | orchestrator | 2026-04-02 01:05:51.842391 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:05:51.842394 | orchestrator | Thursday 02 April 2026 01:04:10 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-04-02 01:05:51.842397 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:05:51.842400 | orchestrator | ok: [testbed-node-1] 2026-04-02 
01:05:51.842403 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:05:51.842407 | orchestrator | 2026-04-02 01:05:51.842410 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 01:05:51.842413 | orchestrator | Thursday 02 April 2026 01:04:10 +0000 (0:00:00.610) 0:00:00.876 ******** 2026-04-02 01:05:51.842416 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-02 01:05:51.842419 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-02 01:05:51.842437 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-02 01:05:51.842442 | orchestrator | 2026-04-02 01:05:51.842448 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-02 01:05:51.842453 | orchestrator | 2026-04-02 01:05:51.842459 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-02 01:05:51.842465 | orchestrator | Thursday 02 April 2026 01:04:11 +0000 (0:00:00.295) 0:00:01.172 ******** 2026-04-02 01:05:51.842468 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:05:51.842474 | orchestrator | 2026-04-02 01:05:51.842477 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-02 01:05:51.842481 | orchestrator | Thursday 02 April 2026 01:04:11 +0000 (0:00:00.669) 0:00:01.841 ******** 2026-04-02 01:05:51.842484 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-02 01:05:51.842487 | orchestrator | 2026-04-02 01:05:51.842491 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-02 01:05:51.842494 | orchestrator | Thursday 02 April 2026 01:04:15 +0000 (0:00:03.414) 0:00:05.256 ******** 2026-04-02 01:05:51.842497 | orchestrator | changed: [testbed-node-0] => (item=magnum -> 
https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-02 01:05:51.842500 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-02 01:05:51.842503 | orchestrator | 2026-04-02 01:05:51.842506 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-02 01:05:51.842509 | orchestrator | Thursday 02 April 2026 01:04:21 +0000 (0:00:06.207) 0:00:11.464 ******** 2026-04-02 01:05:51.842512 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-02 01:05:51.842516 | orchestrator | 2026-04-02 01:05:51.842519 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-02 01:05:51.842522 | orchestrator | Thursday 02 April 2026 01:04:24 +0000 (0:00:03.068) 0:00:14.533 ******** 2026-04-02 01:05:51.842531 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-02 01:05:51.842535 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-02 01:05:51.842538 | orchestrator | 2026-04-02 01:05:51.842541 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-02 01:05:51.842544 | orchestrator | Thursday 02 April 2026 01:04:27 +0000 (0:00:03.307) 0:00:17.840 ******** 2026-04-02 01:05:51.842547 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-02 01:05:51.842550 | orchestrator | 2026-04-02 01:05:51.842553 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-02 01:05:51.842556 | orchestrator | Thursday 02 April 2026 01:04:30 +0000 (0:00:02.867) 0:00:20.707 ******** 2026-04-02 01:05:51.842560 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-02 01:05:51.842563 | orchestrator | 2026-04-02 01:05:51.842566 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-02 
01:05:51.842569 | orchestrator | Thursday 02 April 2026 01:04:33 +0000 (0:00:03.266) 0:00:23.974 ******** 2026-04-02 01:05:51.842572 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.842623 | orchestrator | 2026-04-02 01:05:51.842631 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-02 01:05:51.842636 | orchestrator | Thursday 02 April 2026 01:04:36 +0000 (0:00:02.877) 0:00:26.852 ******** 2026-04-02 01:05:51.842641 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.842646 | orchestrator | 2026-04-02 01:05:51.842651 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-02 01:05:51.842656 | orchestrator | Thursday 02 April 2026 01:04:40 +0000 (0:00:03.572) 0:00:30.425 ******** 2026-04-02 01:05:51.842661 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.842666 | orchestrator | 2026-04-02 01:05:51.842817 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-02 01:05:51.842825 | orchestrator | Thursday 02 April 2026 01:04:43 +0000 (0:00:03.126) 0:00:33.552 ******** 2026-04-02 01:05:51.842869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.842884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.842890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.842902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.842907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.842916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.842924 | orchestrator | 2026-04-02 01:05:51.842930 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-02 01:05:51.842935 | orchestrator | Thursday 02 April 2026 01:04:46 +0000 (0:00:02.775) 0:00:36.327 ******** 2026-04-02 01:05:51.842940 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:05:51.842946 | orchestrator | 2026-04-02 01:05:51.842951 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-02 01:05:51.842956 | orchestrator | Thursday 02 April 2026 01:04:46 +0000 (0:00:00.250) 0:00:36.578 ******** 2026-04-02 01:05:51.842961 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:05:51.842967 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:05:51.842971 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:05:51.842974 | orchestrator | 2026-04-02 01:05:51.842977 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-02 01:05:51.842980 | orchestrator | Thursday 02 April 2026 01:04:47 +0000 (0:00:00.569) 0:00:37.147 ******** 2026-04-02 01:05:51.842983 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 01:05:51.842986 | orchestrator | 2026-04-02 01:05:51.842989 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-02 01:05:51.842992 | orchestrator | Thursday 02 April 2026 01:04:48 +0000 (0:00:01.106) 0:00:38.254 ******** 2026-04-02 01:05:51.842996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843028 | orchestrator | 2026-04-02 01:05:51.843031 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-02 01:05:51.843034 | orchestrator | Thursday 02 April 2026 01:04:50 +0000 (0:00:02.775) 0:00:41.030 ******** 2026-04-02 01:05:51.843037 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:05:51.843040 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:05:51.843043 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:05:51.843047 | orchestrator | 2026-04-02 01:05:51.843050 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-02 01:05:51.843055 | orchestrator | Thursday 02 April 2026 01:04:51 +0000 (0:00:00.517) 0:00:41.547 ******** 2026-04-02 01:05:51.843058 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:05:51.843061 | orchestrator | 2026-04-02 01:05:51.843064 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] 
********* 2026-04-02 01:05:51.843067 | orchestrator | Thursday 02 April 2026 01:04:52 +0000 (0:00:00.694) 0:00:42.242 ******** 2026-04-02 01:05:51.843071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2026-04-02 01:05:51.843082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843101 | orchestrator | 2026-04-02 01:05:51.843104 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-02 01:05:51.843108 | orchestrator | Thursday 02 April 2026 01:04:54 +0000 (0:00:02.385) 0:00:44.627 ******** 2026-04-02 01:05:51.843113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843119 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:05:51.843123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843135 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:05:51.843138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2026-04-02 01:05:51.843145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843149 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:05:51.843152 | orchestrator | 2026-04-02 01:05:51.843155 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-02 01:05:51.843158 | orchestrator | Thursday 02 April 2026 01:04:55 +0000 (0:00:01.245) 0:00:45.873 ******** 2026-04-02 01:05:51.843162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843168 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:05:51.843177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843184 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:05:51.843189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843195 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:05:51.843198 | orchestrator | 2026-04-02 01:05:51.843202 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-02 01:05:51.843205 | orchestrator | Thursday 02 April 2026 01:04:56 +0000 (0:00:00.937) 0:00:46.810 ******** 2026-04-02 01:05:51.843210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843240 | orchestrator | 2026-04-02 01:05:51.843243 | 
orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-02 01:05:51.843246 | orchestrator | Thursday 02 April 2026 01:04:58 +0000 (0:00:02.154) 0:00:48.965 ******** 2026-04-02 01:05:51.843249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843277 | orchestrator | 2026-04-02 01:05:51.843280 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-02 01:05:51.843283 | orchestrator | Thursday 02 April 2026 01:05:05 +0000 (0:00:06.948) 0:00:55.913 ******** 2026-04-02 01:05:51.843289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843295 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:05:51.843299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843310 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:05:51.843313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-02 01:05:51.843318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:05:51.843322 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:05:51.843325 | orchestrator | 2026-04-02 01:05:51.843328 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-02 01:05:51.843331 | orchestrator | Thursday 02 April 2026 01:05:06 +0000 (0:00:00.859) 0:00:56.775 ******** 2026-04-02 01:05:51.843334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-02 01:05:51.843351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:05:51.843363 | orchestrator | 2026-04-02 01:05:51.843366 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-02 01:05:51.843370 | orchestrator | Thursday 02 April 2026 01:05:08 +0000 (0:00:02.072) 0:00:58.848 ******** 2026-04-02 01:05:51.843373 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:05:51.843376 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:05:51.843379 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:05:51.843382 | orchestrator | 2026-04-02 01:05:51.843385 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-02 01:05:51.843388 | orchestrator | Thursday 02 April 2026 01:05:08 +0000 (0:00:00.202) 0:00:59.050 ******** 2026-04-02 01:05:51.843392 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.843395 | orchestrator | 2026-04-02 01:05:51.843398 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-02 01:05:51.843401 | orchestrator | Thursday 02 April 2026 01:05:11 +0000 (0:00:02.143) 0:01:01.194 ******** 2026-04-02 01:05:51.843404 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.843407 | orchestrator | 2026-04-02 01:05:51.843410 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-02 01:05:51.843413 | orchestrator | Thursday 02 April 2026 01:05:13 +0000 (0:00:02.086) 0:01:03.281 ******** 2026-04-02 01:05:51.843418 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.843466 | orchestrator | 2026-04-02 
01:05:51.843471 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-02 01:05:51.843474 | orchestrator | Thursday 02 April 2026 01:05:27 +0000 (0:00:14.000) 0:01:17.282 ******** 2026-04-02 01:05:51.843477 | orchestrator | 2026-04-02 01:05:51.843480 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-02 01:05:51.843484 | orchestrator | Thursday 02 April 2026 01:05:27 +0000 (0:00:00.206) 0:01:17.489 ******** 2026-04-02 01:05:51.843487 | orchestrator | 2026-04-02 01:05:51.843490 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-02 01:05:51.843493 | orchestrator | Thursday 02 April 2026 01:05:27 +0000 (0:00:00.062) 0:01:17.551 ******** 2026-04-02 01:05:51.843496 | orchestrator | 2026-04-02 01:05:51.843500 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-02 01:05:51.843503 | orchestrator | Thursday 02 April 2026 01:05:27 +0000 (0:00:00.071) 0:01:17.623 ******** 2026-04-02 01:05:51.843506 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.843510 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:05:51.843514 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:05:51.843517 | orchestrator | 2026-04-02 01:05:51.843521 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-02 01:05:51.843525 | orchestrator | Thursday 02 April 2026 01:05:39 +0000 (0:00:12.413) 0:01:30.036 ******** 2026-04-02 01:05:51.843529 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:05:51.843533 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:05:51.843536 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:05:51.843540 | orchestrator | 2026-04-02 01:05:51.843544 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:05:51.843549 | 
orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-02 01:05:51.843553 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-02 01:05:51.843557 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-02 01:05:51.843564 | orchestrator | 2026-04-02 01:05:51.843568 | orchestrator | 2026-04-02 01:05:51.843572 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:05:51.843580 | orchestrator | Thursday 02 April 2026 01:05:50 +0000 (0:00:10.928) 0:01:40.964 ******** 2026-04-02 01:05:51.843584 | orchestrator | =============================================================================== 2026-04-02 01:05:51.843588 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.00s 2026-04-02 01:05:51.843592 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.41s 2026-04-02 01:05:51.843596 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.93s 2026-04-02 01:05:51.843600 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.95s 2026-04-02 01:05:51.843604 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.21s 2026-04-02 01:05:51.843608 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.57s 2026-04-02 01:05:51.843612 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.41s 2026-04-02 01:05:51.843616 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.31s 2026-04-02 01:05:51.843620 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.27s 2026-04-02 01:05:51.843624 | orchestrator | magnum : 
Creating Magnum trustee user role ------------------------------ 3.13s 2026-04-02 01:05:51.843628 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.07s 2026-04-02 01:05:51.843633 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.88s 2026-04-02 01:05:51.843636 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.87s 2026-04-02 01:05:51.843641 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.78s 2026-04-02 01:05:51.843645 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.77s 2026-04-02 01:05:51.843649 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.39s 2026-04-02 01:05:51.843652 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.15s 2026-04-02 01:05:51.843656 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.14s 2026-04-02 01:05:51.843660 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.09s 2026-04-02 01:05:51.843664 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.07s 2026-04-02 01:05:51.843668 | orchestrator | 2026-04-02 01:05:51 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:51.843672 | orchestrator | 2026-04-02 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:05:54.899462 | orchestrator | 2026-04-02 01:05:54 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:05:54.901267 | orchestrator | 2026-04-02 01:05:54 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state STARTED 2026-04-02 01:05:54.903106 | orchestrator | 2026-04-02 01:05:54 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:05:54.903162 | 
orchestrator | 2026-04-02 01:05:54 | INFO  | Wait 1 second(s) until the next check [identical polling cycles for tasks e102d1db-b625-4974-88d7-9bd6b0c61e43, c04f0d16-42fb-4c05-b604-3ab7be80e289 and 4801e223-5028-488b-8554-7e3faac10e43, each reported in state STARTED every 3 seconds from 01:05:57 through 01:06:49, omitted] 2026-04-02 01:06:52.742445 | orchestrator | 2026-04-02 01:06:52 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:06:52.747935 | orchestrator | 2026-04-02 01:06:52 | INFO  | Task c04f0d16-42fb-4c05-b604-3ab7be80e289 is in state SUCCESS 2026-04-02 01:06:52.749610 | orchestrator | 2026-04-02 01:06:52.749663 | orchestrator | 2026-04-02 01:06:52.749669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:06:52.749673 | 
orchestrator | 2026-04-02 01:06:52.749676 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:06:52.749680 | orchestrator | Thursday 02 April 2026 01:04:40 +0000 (0:00:00.302) 0:00:00.302 ******** 2026-04-02 01:06:52.749686 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:06:52.749692 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:06:52.749698 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:06:52.749703 | orchestrator | 2026-04-02 01:06:52.749709 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 01:06:52.749713 | orchestrator | Thursday 02 April 2026 01:04:40 +0000 (0:00:00.282) 0:00:00.585 ******** 2026-04-02 01:06:52.749719 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-02 01:06:52.749726 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-02 01:06:52.749731 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-02 01:06:52.749737 | orchestrator | 2026-04-02 01:06:52.749742 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-02 01:06:52.749748 | orchestrator | 2026-04-02 01:06:52.749754 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-02 01:06:52.749759 | orchestrator | Thursday 02 April 2026 01:04:40 +0000 (0:00:00.284) 0:00:00.869 ******** 2026-04-02 01:06:52.749765 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:06:52.749770 | orchestrator | 2026-04-02 01:06:52.749775 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-02 01:06:52.749780 | orchestrator | Thursday 02 April 2026 01:04:41 +0000 (0:00:00.600) 0:00:01.469 ******** 2026-04-02 01:06:52.749797 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.749822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.749828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2026-04-02 01:06:52.749834 | orchestrator | 2026-04-02 01:06:52.749839 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-02 01:06:52.749844 | orchestrator | Thursday 02 April 2026 01:04:42 +0000 (0:00:00.996) 0:00:02.466 ******** 2026-04-02 01:06:52.749849 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-02 01:06:52.749854 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-02 01:06:52.749859 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 01:06:52.749864 | orchestrator | 2026-04-02 01:06:52.749868 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-02 01:06:52.749873 | orchestrator | Thursday 02 April 2026 01:04:43 +0000 (0:00:00.851) 0:00:03.318 ******** 2026-04-02 01:06:52.749879 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:06:52.749884 | orchestrator | 2026-04-02 01:06:52.749889 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-02 01:06:52.749894 | orchestrator | Thursday 02 April 2026 01:04:44 +0000 (0:00:00.690) 0:00:04.009 ******** 2026-04-02 01:06:52.749909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.749916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.749931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.749937 | orchestrator | 2026-04-02 01:06:52.749942 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-02 01:06:52.749947 | orchestrator | Thursday 02 April 2026 01:04:46 +0000 (0:00:02.173) 0:00:06.182 ******** 2026-04-02 01:06:52.749953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 01:06:52.749959 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:06:52.749964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 01:06:52.749969 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:06:52.750088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2026-04-02 01:06:52.750098 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:06:52.750104 | orchestrator | 2026-04-02 01:06:52.750110 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-02 01:06:52.750116 | orchestrator | Thursday 02 April 2026 01:04:46 +0000 (0:00:00.667) 0:00:06.850 ******** 2026-04-02 01:06:52.750122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 01:06:52.750138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 01:06:52.750371 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:06:52.750378 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:06:52.750382 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-02 01:06:52.750387 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:06:52.750390 | orchestrator | 2026-04-02 01:06:52.750394 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-02 01:06:52.750398 | orchestrator | Thursday 02 April 2026 01:04:47 +0000 (0:00:00.889) 0:00:07.739 ******** 2026-04-02 01:06:52.750402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.750406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.750416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.750428 | orchestrator | 2026-04-02 01:06:52.750433 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-02 01:06:52.750439 | orchestrator | Thursday 02 April 2026 01:04:49 +0000 (0:00:01.666) 0:00:09.406 ******** 2026-04-02 01:06:52.750444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.750451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.750455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.750459 | orchestrator | 2026-04-02 01:06:52.750462 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-02 01:06:52.750466 | orchestrator | Thursday 02 April 2026 01:04:51 +0000 (0:00:01.570) 0:00:10.976 ******** 2026-04-02 01:06:52.750470 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:06:52.750474 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:06:52.750477 | orchestrator | skipping: 
[testbed-node-2] 2026-04-02 01:06:52.750481 | orchestrator | 2026-04-02 01:06:52.750484 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-02 01:06:52.750488 | orchestrator | Thursday 02 April 2026 01:04:51 +0000 (0:00:00.480) 0:00:11.456 ******** 2026-04-02 01:06:52.750492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-02 01:06:52.750496 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-02 01:06:52.750499 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-02 01:06:52.750503 | orchestrator | 2026-04-02 01:06:52.750506 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-02 01:06:52.750510 | orchestrator | Thursday 02 April 2026 01:04:52 +0000 (0:00:01.340) 0:00:12.796 ******** 2026-04-02 01:06:52.750514 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-02 01:06:52.750518 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-02 01:06:52.750525 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-02 01:06:52.750531 | orchestrator | 2026-04-02 01:06:52.750536 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-04-02 01:06:52.750567 | orchestrator | Thursday 02 April 2026 01:04:54 +0000 (0:00:01.173) 0:00:13.970 ******** 2026-04-02 01:06:52.750578 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 01:06:52.750583 | orchestrator | 2026-04-02 01:06:52.750589 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 
2026-04-02 01:06:52.750594 | orchestrator | Thursday 02 April 2026 01:04:55 +0000 (0:00:01.149) 0:00:15.120 ******** 2026-04-02 01:06:52.750599 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-02 01:06:52.750603 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-02 01:06:52.750607 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:06:52.750611 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:06:52.750615 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:06:52.750619 | orchestrator | 2026-04-02 01:06:52.750622 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-02 01:06:52.750626 | orchestrator | Thursday 02 April 2026 01:04:55 +0000 (0:00:00.738) 0:00:15.858 ******** 2026-04-02 01:06:52.750630 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:06:52.750677 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:06:52.750684 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:06:52.750689 | orchestrator | 2026-04-02 01:06:52.750695 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-02 01:06:52.750700 | orchestrator | Thursday 02 April 2026 01:04:56 +0000 (0:00:00.384) 0:00:16.242 ******** 2026-04-02 01:06:52.750710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1094835, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2229595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-04-02 01:06:52.750717 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (loop items condensed below; each item's full stat dict is identical across the three nodes: a regular file under /operations/grafana/dashboards/, mode '0644', owner root:root, uid 0, gid 0, nlink 1, dev 120, and each was reported changed on all three nodes)
orchestrator |   ceph/ceph-cluster-advanced.json (121701 bytes)
orchestrator |   ceph/cephfsdashboard.json (143913 bytes)
orchestrator |   ceph/rbd-overview.json (26019 bytes)
orchestrator |   ceph/ceph_pools.json (25279 bytes)
orchestrator |   ceph/rgw-s3-analytics.json (170293 bytes)
orchestrator |   ceph/ceph-nvmeof-performance.json (33297 bytes)
orchestrator |   ceph/osd-device-details.json (26346 bytes)
orchestrator |   ceph/radosgw-overview.json (46110 bytes)
orchestrator |   ceph/README.md (84 bytes)
orchestrator |   ceph/ceph-cluster.json (34113 bytes)
orchestrator |   ceph/cephfs-overview.json (9025 bytes)
orchestrator |   ceph/pool-detail.json (19231 bytes)
orchestrator |   ceph/rbd-details.json (13320 bytes)
orchestrator |   ceph/ceph_overview.json (80386 bytes)
orchestrator |   ceph/radosgw-detail.json (20042 bytes)
orchestrator |   ceph/smb-overview.json (29877 bytes)
orchestrator |   ceph/osds-overview.json (38375 bytes)
orchestrator |   ceph/multi-cluster-overview.json (63043 bytes)
orchestrator |   ceph/hosts-overview.json (27387 bytes)
2026-04-02
01:06:52.751193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1094916, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.236186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1094916, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.236186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1094916, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.236186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-04-02 01:06:52.751213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1094892, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2296364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1094892, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2296364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1094892, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2296364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-04-02 01:06:52.751234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1094932, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.238292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1094932, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.238292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1094932, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.238292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1094857, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2247398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1094857, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2247398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1094857, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2247398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095157, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2735114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095157, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2735114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095157, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2735114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1095025, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2550173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1095025, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2550173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1095025, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2550173, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094987, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.246405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094987, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.246405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094987, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 
1775088148.0, 'ctime': 1775088987.246405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1095062, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2567053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1095062, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2567053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
15767, 'inode': 1095062, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2567053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094956, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2441857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094956, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2441857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094956, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2441857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095116, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2675345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095116, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2675345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095116, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2675345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1095064, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2631874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1095064, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2631874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751454 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1095064, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2631874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1095123, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.26768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1095123, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.26768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1095123, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.26768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095147, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.272603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095147, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.272603, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095147, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.272603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1095112, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2665367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1095112, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 
1775088987.2665367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1095112, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2665367, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1095056, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2560444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1095056, 'dev': 120, 'nlink': 1, 
'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2560444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1095056, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2560444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1095022, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2508643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 82960, 'inode': 1095022, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2508643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1095022, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2508643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1095051, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2557774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1095051, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2557774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1095051, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2557774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094992, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2504737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094992, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2504737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094992, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2504737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1095061, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2562697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1095061, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2562697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1095061, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2562697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095136, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2718482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751597 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095136, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2718482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095136, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2718482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095127, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2695227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095127, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2695227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095127, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2695227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094968, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.245582, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094968, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.245582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094968, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.245582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094982, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 
1775088148.0, 'ctime': 1775088987.24616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094982, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.24616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094982, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.24616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 70691, 'inode': 1095100, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2657611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095100, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2657611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095100, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.2657611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1095125, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.268175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1095125, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.268175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1095125, 'dev': 120, 'nlink': 1, 'atime': 1775088148.0, 'mtime': 1775088148.0, 'ctime': 1775088987.268175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-02 01:06:52.751707 | orchestrator | 2026-04-02 01:06:52.751713 | orchestrator | TASK [grafana : Check grafana containers] 
************************************** 2026-04-02 01:06:52.751718 | orchestrator | Thursday 02 April 2026 01:05:33 +0000 (0:00:36.767) 0:00:53.010 ******** 2026-04-02 01:06:52.751727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.751733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.751738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-02 01:06:52.751742 | orchestrator | 2026-04-02 01:06:52.751745 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-02 01:06:52.751748 | orchestrator | Thursday 02 April 2026 01:05:34 +0000 (0:00:01.220) 0:00:54.230 ******** 2026-04-02 01:06:52.751755 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:06:52.751758 | orchestrator | 2026-04-02 01:06:52.751761 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-02 01:06:52.751764 | orchestrator | Thursday 02 April 2026 01:05:37 +0000 (0:00:03.037) 0:00:57.268 ******** 2026-04-02 01:06:52.751767 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:06:52.751771 | orchestrator | 2026-04-02 01:06:52.751774 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-02 01:06:52.751777 | orchestrator | Thursday 02 April 2026 01:05:39 +0000 (0:00:02.247) 0:00:59.515 ******** 2026-04-02 01:06:52.751780 | orchestrator | 2026-04-02 01:06:52.751783 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-02 01:06:52.751786 | orchestrator | Thursday 02 April 2026 01:05:39 +0000 (0:00:00.067) 0:00:59.583 ******** 2026-04-02 01:06:52.751789 | orchestrator | 2026-04-02 01:06:52.751792 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-02 01:06:52.751796 | orchestrator | Thursday 02 April 2026 01:05:39 +0000 (0:00:00.066) 0:00:59.650 ******** 2026-04-02 01:06:52.751799 | orchestrator | 2026-04-02 01:06:52.751802 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] 
******************** 2026-04-02 01:06:52.751805 | orchestrator | Thursday 02 April 2026 01:05:39 +0000 (0:00:00.097) 0:00:59.748 ******** 2026-04-02 01:06:52.751808 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:06:52.751814 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:06:52.751817 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:06:52.751821 | orchestrator | 2026-04-02 01:06:52.751824 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-02 01:06:52.751827 | orchestrator | Thursday 02 April 2026 01:05:42 +0000 (0:00:02.220) 0:01:01.969 ******** 2026-04-02 01:06:52.751830 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:06:52.751833 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:06:52.751836 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-02 01:06:52.751840 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-04-02 01:06:52.751843 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:06:52.751846 | orchestrator | 2026-04-02 01:06:52.751850 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-02 01:06:52.751853 | orchestrator | Thursday 02 April 2026 01:06:08 +0000 (0:00:26.034) 0:01:28.003 ******** 2026-04-02 01:06:52.751856 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:06:52.751859 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:06:52.751862 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:06:52.751865 | orchestrator | 2026-04-02 01:06:52.751868 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-02 01:06:52.751872 | orchestrator | Thursday 02 April 2026 01:06:45 +0000 (0:00:37.916) 0:02:05.919 ******** 2026-04-02 01:06:52.751875 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:06:52.751878 | orchestrator | 2026-04-02 01:06:52.751881 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-02 01:06:52.751884 | orchestrator | Thursday 02 April 2026 01:06:48 +0000 (0:00:02.638) 0:02:08.558 ******** 2026-04-02 01:06:52.751887 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:06:52.751890 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:06:52.751894 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:06:52.751897 | orchestrator | 2026-04-02 01:06:52.751900 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-02 01:06:52.751905 | orchestrator | Thursday 02 April 2026 01:06:48 +0000 (0:00:00.250) 0:02:08.809 ******** 2026-04-02 01:06:52.751908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-04-02 01:06:52.751916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-02 01:06:52.751919 | orchestrator | 2026-04-02 01:06:52.751923 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-02 01:06:52.751926 | orchestrator | Thursday 02 April 2026 01:06:51 +0000 (0:00:03.090) 0:02:11.900 ******** 2026-04-02 01:06:52.751929 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:06:52.751933 | orchestrator | 2026-04-02 01:06:52.751936 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:06:52.751941 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-02 01:06:52.751945 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-02 01:06:52.751949 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-02 01:06:52.751953 | orchestrator | 2026-04-02 01:06:52.751957 | orchestrator | 2026-04-02 01:06:52.751961 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:06:52.751964 | orchestrator | Thursday 02 April 2026 01:06:52 +0000 (0:00:00.240) 0:02:12.141 ******** 2026-04-02 01:06:52.751968 | orchestrator | =============================================================================== 2026-04-02 01:06:52.751972 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 37.92s 2026-04-02 01:06:52.751976 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 36.77s 2026-04-02 01:06:52.751979 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.03s 2026-04-02 01:06:52.751983 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 3.09s 2026-04-02 01:06:52.751987 | orchestrator | grafana : Creating grafana database ------------------------------------- 3.04s 2026-04-02 01:06:52.751990 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.64s 2026-04-02 01:06:52.751994 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s 2026-04-02 01:06:52.751998 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.22s 2026-04-02 01:06:52.752001 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.17s 2026-04-02 01:06:52.752012 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.67s 2026-04-02 01:06:52.752020 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.57s 2026-04-02 01:06:52.752023 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.34s 2026-04-02 01:06:52.752029 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.22s 2026-04-02 01:06:52.752033 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.17s 2026-04-02 01:06:52.752037 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.15s 2026-04-02 01:06:52.752041 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.00s 2026-04-02 01:06:52.752044 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.89s 2026-04-02 01:06:52.752048 | orchestrator | grafana : Check if extra configuration file 
exists ---------------------- 0.85s 2026-04-02 01:06:52.752052 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s 2026-04-02 01:06:52.752056 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s 2026-04-02 01:06:52.752060 | orchestrator | 2026-04-02 01:06:52 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:06:52.752067 | orchestrator | 2026-04-02 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:06:55.793769 | orchestrator | 2026-04-02 01:06:55 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:06:55.795887 | orchestrator | 2026-04-02 01:06:55 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:06:55.795933 | orchestrator | 2026-04-02 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:06:58.836164 | orchestrator | 2026-04-02 01:06:58 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:06:58.837291 | orchestrator | 2026-04-02 01:06:58 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:06:58.837568 | orchestrator | 2026-04-02 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:07:01.879107 | orchestrator | 2026-04-02 01:07:01 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:07:01.881419 | orchestrator | 2026-04-02 01:07:01 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:07:01.881476 | orchestrator | 2026-04-02 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:07:04.923764 | orchestrator | 2026-04-02 01:07:04 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state STARTED 2026-04-02 01:07:04.925060 | orchestrator | 2026-04-02 01:07:04 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED 2026-04-02 01:07:04.925607 | orchestrator | 
2026-04-02 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-04-02 01:07:07.968498 | orchestrator | 2026-04-02 01:07:07 | INFO  | Task e102d1db-b625-4974-88d7-9bd6b0c61e43 is in state SUCCESS 2026-04-02 01:07:07.969588 | orchestrator | 2026-04-02 01:07:07.969632 | orchestrator | 2026-04-02 01:07:07.969639 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:07:07.969643 | orchestrator | 2026-04-02 01:07:07.969648 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-02 01:07:07.969652 | orchestrator | Thursday 02 April 2026 00:58:21 +0000 (0:00:00.334) 0:00:00.334 ******** 2026-04-02 01:07:07.969656 | orchestrator | changed: [testbed-manager] 2026-04-02 01:07:07.969661 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.969665 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:07:07.969668 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:07:07.969672 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:07:07.969676 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:07:07.969680 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:07:07.969684 | orchestrator | 2026-04-02 01:07:07.969688 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:07:07.969692 | orchestrator | Thursday 02 April 2026 00:58:22 +0000 (0:00:00.743) 0:00:01.077 ******** 2026-04-02 01:07:07.969696 | orchestrator | changed: [testbed-manager] 2026-04-02 01:07:07.969699 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.969703 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:07:07.969707 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:07:07.969711 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:07:07.969715 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:07:07.969719 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:07:07.969723 | 
orchestrator | 2026-04-02 01:07:07.969727 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 01:07:07.969731 | orchestrator | Thursday 02 April 2026 00:58:22 +0000 (0:00:00.761) 0:00:01.838 ******** 2026-04-02 01:07:07.969734 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-02 01:07:07.969739 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-02 01:07:07.969755 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-02 01:07:07.969759 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-02 01:07:07.969763 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-02 01:07:07.969807 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-02 01:07:07.969811 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-02 01:07:07.969815 | orchestrator | 2026-04-02 01:07:07.969819 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-02 01:07:07.969823 | orchestrator | 2026-04-02 01:07:07.969827 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-02 01:07:07.969855 | orchestrator | Thursday 02 April 2026 00:58:24 +0000 (0:00:01.376) 0:00:03.215 ******** 2026-04-02 01:07:07.969861 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:07:07.969865 | orchestrator | 2026-04-02 01:07:07.969869 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-02 01:07:07.969873 | orchestrator | Thursday 02 April 2026 00:58:25 +0000 (0:00:00.937) 0:00:04.152 ******** 2026-04-02 01:07:07.969878 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-02 01:07:07.969882 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-02 01:07:07.969886 | 
orchestrator | 2026-04-02 01:07:07.969914 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-02 01:07:07.969919 | orchestrator | Thursday 02 April 2026 00:58:29 +0000 (0:00:04.625) 0:00:08.778 ******** 2026-04-02 01:07:07.969923 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-02 01:07:07.969927 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-02 01:07:07.969931 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.969934 | orchestrator | 2026-04-02 01:07:07.969938 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-02 01:07:07.969942 | orchestrator | Thursday 02 April 2026 00:58:34 +0000 (0:00:04.293) 0:00:13.071 ******** 2026-04-02 01:07:07.969946 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.969950 | orchestrator | 2026-04-02 01:07:07.969954 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-02 01:07:07.969958 | orchestrator | Thursday 02 April 2026 00:58:34 +0000 (0:00:00.571) 0:00:13.643 ******** 2026-04-02 01:07:07.969962 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.969966 | orchestrator | 2026-04-02 01:07:07.969970 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-02 01:07:07.970135 | orchestrator | Thursday 02 April 2026 00:58:35 +0000 (0:00:01.103) 0:00:14.746 ******** 2026-04-02 01:07:07.970141 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.970147 | orchestrator | 2026-04-02 01:07:07.970153 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-02 01:07:07.970159 | orchestrator | Thursday 02 April 2026 00:58:38 +0000 (0:00:02.430) 0:00:17.177 ******** 2026-04-02 01:07:07.970174 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.970181 | orchestrator | skipping: [testbed-node-1] 
2026-04-02 01:07:07.970187 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970193 | orchestrator | 2026-04-02 01:07:07.970200 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-02 01:07:07.970205 | orchestrator | Thursday 02 April 2026 00:58:38 +0000 (0:00:00.449) 0:00:17.626 ******** 2026-04-02 01:07:07.970210 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.970214 | orchestrator | 2026-04-02 01:07:07.970217 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-02 01:07:07.970221 | orchestrator | Thursday 02 April 2026 00:59:12 +0000 (0:00:34.153) 0:00:51.780 ******** 2026-04-02 01:07:07.970225 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.970229 | orchestrator | 2026-04-02 01:07:07.970233 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-02 01:07:07.970237 | orchestrator | Thursday 02 April 2026 00:59:28 +0000 (0:00:15.509) 0:01:07.289 ******** 2026-04-02 01:07:07.970246 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.970250 | orchestrator | 2026-04-02 01:07:07.970254 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-02 01:07:07.970258 | orchestrator | Thursday 02 April 2026 00:59:41 +0000 (0:00:13.511) 0:01:20.800 ******** 2026-04-02 01:07:07.970270 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.970274 | orchestrator | 2026-04-02 01:07:07.970278 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-02 01:07:07.970402 | orchestrator | Thursday 02 April 2026 00:59:42 +0000 (0:00:00.589) 0:01:21.390 ******** 2026-04-02 01:07:07.970409 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.970413 | orchestrator | 2026-04-02 01:07:07.970417 | orchestrator | TASK [nova : include_tasks] 
**************************************************** 2026-04-02 01:07:07.970421 | orchestrator | Thursday 02 April 2026 00:59:42 +0000 (0:00:00.408) 0:01:21.798 ******** 2026-04-02 01:07:07.970425 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:07:07.970429 | orchestrator | 2026-04-02 01:07:07.970432 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-02 01:07:07.970436 | orchestrator | Thursday 02 April 2026 00:59:43 +0000 (0:00:00.528) 0:01:22.326 ******** 2026-04-02 01:07:07.970440 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.970444 | orchestrator | 2026-04-02 01:07:07.970448 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-02 01:07:07.970451 | orchestrator | Thursday 02 April 2026 01:00:03 +0000 (0:00:19.764) 0:01:42.091 ******** 2026-04-02 01:07:07.970455 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.970459 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970463 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970467 | orchestrator | 2026-04-02 01:07:07.970470 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-02 01:07:07.970474 | orchestrator | 2026-04-02 01:07:07.970508 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-02 01:07:07.970513 | orchestrator | Thursday 02 April 2026 01:00:03 +0000 (0:00:00.315) 0:01:42.407 ******** 2026-04-02 01:07:07.970517 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:07:07.970521 | orchestrator | 2026-04-02 01:07:07.970524 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-02 01:07:07.970528 | orchestrator | Thursday 02 April 2026 01:00:04 +0000 
(0:00:01.038) 0:01:43.445 ******** 2026-04-02 01:07:07.970532 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970536 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970540 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.970544 | orchestrator | 2026-04-02 01:07:07.970548 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-02 01:07:07.970551 | orchestrator | Thursday 02 April 2026 01:00:06 +0000 (0:00:02.110) 0:01:45.555 ******** 2026-04-02 01:07:07.970555 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970559 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970563 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.970567 | orchestrator | 2026-04-02 01:07:07.970571 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-02 01:07:07.970575 | orchestrator | Thursday 02 April 2026 01:00:08 +0000 (0:00:02.064) 0:01:47.620 ******** 2026-04-02 01:07:07.970579 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.970583 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970586 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970591 | orchestrator | 2026-04-02 01:07:07.970595 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-02 01:07:07.970598 | orchestrator | Thursday 02 April 2026 01:00:09 +0000 (0:00:00.368) 0:01:47.989 ******** 2026-04-02 01:07:07.970602 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-02 01:07:07.970606 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970614 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-02 01:07:07.970618 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970622 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-02 01:07:07.970626 | orchestrator | ok: [testbed-node-0 -> {{ 
service_rabbitmq_delegate_host }}] 2026-04-02 01:07:07.970630 | orchestrator | 2026-04-02 01:07:07.970634 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-02 01:07:07.970640 | orchestrator | Thursday 02 April 2026 01:00:17 +0000 (0:00:08.527) 0:01:56.517 ******** 2026-04-02 01:07:07.970646 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.970652 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970662 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970669 | orchestrator | 2026-04-02 01:07:07.970676 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-02 01:07:07.970682 | orchestrator | Thursday 02 April 2026 01:00:18 +0000 (0:00:00.459) 0:01:56.976 ******** 2026-04-02 01:07:07.970689 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-02 01:07:07.970696 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970725 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-02 01:07:07.970731 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.970866 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-02 01:07:07.970871 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970875 | orchestrator | 2026-04-02 01:07:07.970879 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-02 01:07:07.970883 | orchestrator | Thursday 02 April 2026 01:00:19 +0000 (0:00:01.290) 0:01:58.266 ******** 2026-04-02 01:07:07.970887 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970891 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970895 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.970899 | orchestrator | 2026-04-02 01:07:07.970903 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-02 01:07:07.970907 | 
orchestrator | Thursday 02 April 2026 01:00:19 +0000 (0:00:00.490) 0:01:58.757 ******** 2026-04-02 01:07:07.970911 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970915 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970918 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.970922 | orchestrator | 2026-04-02 01:07:07.970926 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-02 01:07:07.970930 | orchestrator | Thursday 02 April 2026 01:00:20 +0000 (0:00:01.010) 0:01:59.768 ******** 2026-04-02 01:07:07.970934 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970938 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970955 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.970959 | orchestrator | 2026-04-02 01:07:07.970963 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-02 01:07:07.970967 | orchestrator | Thursday 02 April 2026 01:00:23 +0000 (0:00:02.198) 0:02:01.966 ******** 2026-04-02 01:07:07.970971 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970975 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.970979 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.970983 | orchestrator | 2026-04-02 01:07:07.970986 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-02 01:07:07.970990 | orchestrator | Thursday 02 April 2026 01:00:45 +0000 (0:00:22.760) 0:02:24.727 ******** 2026-04-02 01:07:07.970994 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.970998 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.971002 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.971006 | orchestrator | 2026-04-02 01:07:07.971009 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-02 01:07:07.971013 | orchestrator | 
Thursday 02 April 2026 01:00:59 +0000 (0:00:13.373) 0:02:38.101 ******** 2026-04-02 01:07:07.971080 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.971087 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.971120 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.971126 | orchestrator | 2026-04-02 01:07:07.971130 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-02 01:07:07.971134 | orchestrator | Thursday 02 April 2026 01:01:00 +0000 (0:00:00.785) 0:02:38.887 ******** 2026-04-02 01:07:07.971138 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.971142 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.971146 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.971149 | orchestrator | 2026-04-02 01:07:07.971153 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-02 01:07:07.971157 | orchestrator | Thursday 02 April 2026 01:01:14 +0000 (0:00:14.278) 0:02:53.166 ******** 2026-04-02 01:07:07.971161 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.971165 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.971169 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.971173 | orchestrator | 2026-04-02 01:07:07.971176 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-02 01:07:07.971180 | orchestrator | Thursday 02 April 2026 01:01:16 +0000 (0:00:01.775) 0:02:54.941 ******** 2026-04-02 01:07:07.971184 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.971188 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.971192 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.971195 | orchestrator | 2026-04-02 01:07:07.971199 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-02 01:07:07.971203 | orchestrator | 2026-04-02 
01:07:07.971207 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-02 01:07:07.971211 | orchestrator | Thursday 02 April 2026 01:01:16 +0000 (0:00:00.531) 0:02:55.473 ******** 2026-04-02 01:07:07.971215 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:07:07.971219 | orchestrator | 2026-04-02 01:07:07.971223 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-02 01:07:07.971249 | orchestrator | Thursday 02 April 2026 01:01:17 +0000 (0:00:00.651) 0:02:56.124 ******** 2026-04-02 01:07:07.971253 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-02 01:07:07.971274 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-02 01:07:07.971593 | orchestrator | 2026-04-02 01:07:07.971610 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-02 01:07:07.971614 | orchestrator | Thursday 02 April 2026 01:01:20 +0000 (0:00:03.725) 0:02:59.850 ******** 2026-04-02 01:07:07.971619 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-02 01:07:07.971624 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-02 01:07:07.971628 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-02 01:07:07.971632 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-02 01:07:07.971637 | orchestrator | 2026-04-02 01:07:07.971644 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-02 01:07:07.971653 | orchestrator | Thursday 02 April 2026 
01:01:28 +0000 (0:00:07.686) 0:03:07.536 ******** 2026-04-02 01:07:07.971660 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-02 01:07:07.971667 | orchestrator | 2026-04-02 01:07:07.971677 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-02 01:07:07.971683 | orchestrator | Thursday 02 April 2026 01:01:32 +0000 (0:00:03.897) 0:03:11.434 ******** 2026-04-02 01:07:07.971690 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-02 01:07:07.971697 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-02 01:07:07.971703 | orchestrator | 2026-04-02 01:07:07.971707 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-02 01:07:07.971716 | orchestrator | Thursday 02 April 2026 01:01:37 +0000 (0:00:04.520) 0:03:15.955 ******** 2026-04-02 01:07:07.971720 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-02 01:07:07.971724 | orchestrator | 2026-04-02 01:07:07.971728 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-02 01:07:07.971731 | orchestrator | Thursday 02 April 2026 01:01:40 +0000 (0:00:03.691) 0:03:19.646 ******** 2026-04-02 01:07:07.971735 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-02 01:07:07.971739 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-02 01:07:07.971743 | orchestrator | 2026-04-02 01:07:07.971748 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-02 01:07:07.971818 | orchestrator | Thursday 02 April 2026 01:01:48 +0000 (0:00:08.143) 0:03:27.790 ******** 2026-04-02 01:07:07.971840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.971849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.971854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.971880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.971886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.971890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.971895 | orchestrator | 2026-04-02 01:07:07.971899 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-02 01:07:07.971903 | orchestrator | Thursday 02 April 2026 01:01:51 +0000 (0:00:02.927) 0:03:30.718 ******** 2026-04-02 01:07:07.971906 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.971910 | orchestrator | 2026-04-02 01:07:07.971914 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-02 01:07:07.971918 | orchestrator | Thursday 02 April 2026 
01:01:52 +0000 (0:00:00.177) 0:03:30.896 ******** 2026-04-02 01:07:07.971922 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.971926 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.971929 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.971933 | orchestrator | 2026-04-02 01:07:07.971937 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-02 01:07:07.971941 | orchestrator | Thursday 02 April 2026 01:01:52 +0000 (0:00:00.370) 0:03:31.266 ******** 2026-04-02 01:07:07.971945 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-02 01:07:07.971948 | orchestrator | 2026-04-02 01:07:07.971952 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-02 01:07:07.971956 | orchestrator | Thursday 02 April 2026 01:01:53 +0000 (0:00:00.807) 0:03:32.074 ******** 2026-04-02 01:07:07.971960 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.971964 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.971967 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.971971 | orchestrator | 2026-04-02 01:07:07.971975 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-02 01:07:07.971984 | orchestrator | Thursday 02 April 2026 01:01:53 +0000 (0:00:00.715) 0:03:32.789 ******** 2026-04-02 01:07:07.971991 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:07:07.971998 | orchestrator | 2026-04-02 01:07:07.972004 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-02 01:07:07.972011 | orchestrator | Thursday 02 April 2026 01:01:54 +0000 (0:00:01.060) 0:03:33.850 ******** 2026-04-02 01:07:07.972021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972101 | orchestrator | 2026-04-02 01:07:07.972105 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-02 01:07:07.972109 | orchestrator | Thursday 02 April 2026 01:01:57 +0000 (0:00:02.922) 0:03:36.773 ******** 2026-04-02 01:07:07.972113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972124 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.972131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972139 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.972154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972163 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.972167 | orchestrator | 2026-04-02 01:07:07.972173 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-02 01:07:07.972177 | orchestrator | Thursday 02 April 2026 01:01:59 +0000 (0:00:01.746) 0:03:38.520 ******** 2026-04-02 
01:07:07.972181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972191 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.972205 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972214 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.972223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972233 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.972237 | orchestrator | 2026-04-02 01:07:07.972241 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2026-04-02 01:07:07.972245 | orchestrator | Thursday 02 April 2026 01:02:01 +0000 (0:00:02.261) 0:03:40.781 ******** 2026-04-02 01:07:07.972262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972289 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972335 | orchestrator | 2026-04-02 01:07:07.972340 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-02 01:07:07.972347 | orchestrator | Thursday 
02 April 2026 01:02:04 +0000 (0:00:02.774) 0:03:43.556 ******** 2026-04-02 01:07:07.972353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972413 | orchestrator | 2026-04-02 01:07:07.972417 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-02 01:07:07.972421 | orchestrator | Thursday 02 April 2026 01:02:13 +0000 (0:00:08.864) 0:03:52.421 
******** 2026-04-02 01:07:07.972427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972446 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.972453 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972473 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.972481 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-02 01:07:07.972492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.972500 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.972508 | orchestrator | 2026-04-02 01:07:07.972515 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2026-04-02 01:07:07.972522 | orchestrator | Thursday 02 April 2026 01:02:14 +0000 (0:00:01.025) 0:03:53.447 ******** 2026-04-02 01:07:07.972527 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:07:07.972532 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.972537 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:07:07.972542 | orchestrator | 2026-04-02 01:07:07.972557 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-02 01:07:07.972562 | orchestrator | Thursday 02 April 2026 01:02:16 +0000 (0:00:02.397) 0:03:55.844 ******** 2026-04-02 01:07:07.972566 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.972572 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.972576 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.972580 | orchestrator | 2026-04-02 01:07:07.972584 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-02 01:07:07.972587 | orchestrator | Thursday 02 April 2026 01:02:17 +0000 (0:00:00.269) 0:03:56.114 ******** 2026-04-02 01:07:07.972592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-02 01:07:07.972636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.972657 | orchestrator | 2026-04-02 01:07:07.972663 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-02 01:07:07.972670 | orchestrator | Thursday 02 April 2026 01:02:19 +0000 (0:00:02.213) 0:03:58.327 ******** 2026-04-02 01:07:07.972677 | orchestrator | 2026-04-02 01:07:07.972683 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-02 01:07:07.972691 | orchestrator | Thursday 02 April 2026 01:02:19 +0000 (0:00:00.201) 0:03:58.528 ******** 2026-04-02 01:07:07.972695 | orchestrator | 2026-04-02 01:07:07.972698 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-02 01:07:07.972702 | orchestrator | Thursday 02 April 2026 01:02:19 +0000 (0:00:00.254) 0:03:58.783 ******** 2026-04-02 01:07:07.972706 | orchestrator | 2026-04-02 01:07:07.972710 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-02 01:07:07.972713 | orchestrator | Thursday 02 April 2026 01:02:20 +0000 (0:00:00.215) 0:03:58.999 ******** 
2026-04-02 01:07:07.972717 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.972721 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:07:07.972725 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:07:07.972728 | orchestrator | 2026-04-02 01:07:07.972733 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-02 01:07:07.972740 | orchestrator | Thursday 02 April 2026 01:02:40 +0000 (0:00:20.327) 0:04:19.327 ******** 2026-04-02 01:07:07.972746 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.972752 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:07:07.972759 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:07:07.972766 | orchestrator | 2026-04-02 01:07:07.972772 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-02 01:07:07.972779 | orchestrator | 2026-04-02 01:07:07.972783 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-02 01:07:07.972787 | orchestrator | Thursday 02 April 2026 01:02:52 +0000 (0:00:12.214) 0:04:31.542 ******** 2026-04-02 01:07:07.972791 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:07:07.972803 | orchestrator | 2026-04-02 01:07:07.972813 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-02 01:07:07.972819 | orchestrator | Thursday 02 April 2026 01:02:53 +0000 (0:00:01.066) 0:04:32.608 ******** 2026-04-02 01:07:07.972825 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.972832 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.972838 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.972845 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.972852 | orchestrator | skipping: [testbed-node-1] 
2026-04-02 01:07:07.972858 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.972864 | orchestrator | 2026-04-02 01:07:07.972870 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-02 01:07:07.972876 | orchestrator | Thursday 02 April 2026 01:02:54 +0000 (0:00:00.697) 0:04:33.306 ******** 2026-04-02 01:07:07.972882 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.972888 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.972894 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.972900 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 01:07:07.972906 | orchestrator | 2026-04-02 01:07:07.972912 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-02 01:07:07.972939 | orchestrator | Thursday 02 April 2026 01:02:55 +0000 (0:00:01.001) 0:04:34.307 ******** 2026-04-02 01:07:07.972947 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-02 01:07:07.972954 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-02 01:07:07.972960 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-02 01:07:07.972967 | orchestrator | 2026-04-02 01:07:07.972973 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-02 01:07:07.972979 | orchestrator | Thursday 02 April 2026 01:02:56 +0000 (0:00:01.020) 0:04:35.327 ******** 2026-04-02 01:07:07.972985 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-02 01:07:07.972989 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-02 01:07:07.972992 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-02 01:07:07.972996 | orchestrator | 2026-04-02 01:07:07.973000 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-02 01:07:07.973004 | 
orchestrator | Thursday 02 April 2026 01:02:57 +0000 (0:00:01.332) 0:04:36.660 ******** 2026-04-02 01:07:07.973008 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-02 01:07:07.973011 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.973015 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-02 01:07:07.973019 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.973023 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-02 01:07:07.973027 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.973030 | orchestrator | 2026-04-02 01:07:07.973034 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-02 01:07:07.973038 | orchestrator | Thursday 02 April 2026 01:02:58 +0000 (0:00:00.697) 0:04:37.357 ******** 2026-04-02 01:07:07.973042 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-02 01:07:07.973046 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-02 01:07:07.973050 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.973053 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-02 01:07:07.973057 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-02 01:07:07.973061 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.973065 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-02 01:07:07.973069 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-02 01:07:07.973131 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.973143 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-02 01:07:07.973147 | orchestrator | changed: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-iptables) 2026-04-02 01:07:07.973151 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-02 01:07:07.973155 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-02 01:07:07.973159 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-02 01:07:07.973163 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-02 01:07:07.973169 | orchestrator | 2026-04-02 01:07:07.973176 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-02 01:07:07.973186 | orchestrator | Thursday 02 April 2026 01:02:59 +0000 (0:00:01.173) 0:04:38.531 ******** 2026-04-02 01:07:07.973193 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.973199 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.973206 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.973213 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:07:07.973219 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:07:07.973225 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:07:07.973231 | orchestrator | 2026-04-02 01:07:07.973236 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-02 01:07:07.973242 | orchestrator | Thursday 02 April 2026 01:03:00 +0000 (0:00:01.316) 0:04:39.847 ******** 2026-04-02 01:07:07.973248 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.973255 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.973261 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.973267 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:07:07.973273 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:07:07.973280 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:07:07.973287 | orchestrator | 2026-04-02 01:07:07.973293 | 
orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-02 01:07:07.973300 | orchestrator | Thursday 02 April 2026 01:03:02 +0000 (0:00:01.875) 0:04:41.722 ******** 2026-04-02 01:07:07.973347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973514 | orchestrator | 2026-04-02 01:07:07.973520 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-02 01:07:07.973524 | orchestrator | Thursday 02 April 2026 01:03:06 +0000 (0:00:03.521) 0:04:45.243 ******** 2026-04-02 01:07:07.973528 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:07:07.973533 | orchestrator | 2026-04-02 01:07:07.973537 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-02 01:07:07.973540 | orchestrator | Thursday 02 April 2026 01:03:07 +0000 (0:00:01.193) 0:04:46.437 ******** 2026-04-02 01:07:07.973545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.973642 | orchestrator | 2026-04-02 01:07:07.973646 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-02 01:07:07.973650 | orchestrator | Thursday 02 April 2026 01:03:11 +0000 (0:00:04.057) 0:04:50.494 ******** 2026-04-02 01:07:07.973666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.973673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.973677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973681 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.973685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.973691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.973706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973713 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.973717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.973722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.973726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973729 | orchestrator | skipping: [testbed-node-5] 2026-04-02 
01:07:07.973733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-02 01:07:07.973741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973750 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.973764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-02 01:07:07.973770 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973776 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.973783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-02 01:07:07.973789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973795 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.973801 | orchestrator | 
2026-04-02 01:07:07.973806 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-02 01:07:07.973812 | orchestrator | Thursday 02 April 2026 01:03:13 +0000 (0:00:01.546) 0:04:52.041 ******** 2026-04-02 01:07:07.973818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.973828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.973856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973862 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.973867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.973871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.973875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973879 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.973885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.973892 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.973908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973912 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.973916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-02 01:07:07.973921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973924 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.973928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-02 01:07:07.973933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2026-04-02 01:07:07.973939 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.973945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-02 01:07:07.973988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-02 01:07:07.973995 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.973999 | orchestrator | 2026-04-02 01:07:07.974002 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-02 01:07:07.974006 | orchestrator | Thursday 02 April 2026 01:03:15 +0000 (0:00:02.020) 0:04:54.062 ******** 2026-04-02 01:07:07.974010 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.974032 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.974036 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.974040 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 01:07:07.974044 | orchestrator | 2026-04-02 01:07:07.974048 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-02 01:07:07.974052 | orchestrator | Thursday 02 April 2026 01:03:16 +0000 (0:00:01.007) 0:04:55.070 ******** 2026-04-02 01:07:07.974056 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-02 01:07:07.974059 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-02 01:07:07.974063 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-02 01:07:07.974067 | orchestrator | 2026-04-02 01:07:07.974071 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-02 01:07:07.974075 | orchestrator | Thursday 02 April 2026 01:03:17 +0000 (0:00:01.395) 0:04:56.465 ******** 2026-04-02 01:07:07.974078 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-02 01:07:07.974082 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-02 01:07:07.974086 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-02 01:07:07.974090 | orchestrator | 2026-04-02 01:07:07.974093 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-02 01:07:07.974097 | orchestrator | Thursday 02 April 2026 01:03:19 +0000 (0:00:01.843) 0:04:58.308 ******** 2026-04-02 01:07:07.974101 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:07:07.974105 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:07:07.974109 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:07:07.974113 | orchestrator | 2026-04-02 01:07:07.974117 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-02 01:07:07.974120 | orchestrator | Thursday 02 April 2026 01:03:19 +0000 (0:00:00.557) 0:04:58.866 ******** 2026-04-02 01:07:07.974124 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:07:07.974128 
| orchestrator | ok: [testbed-node-4] 2026-04-02 01:07:07.974135 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:07:07.974139 | orchestrator | 2026-04-02 01:07:07.974143 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-02 01:07:07.974147 | orchestrator | Thursday 02 April 2026 01:03:20 +0000 (0:00:00.518) 0:04:59.384 ******** 2026-04-02 01:07:07.974151 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-02 01:07:07.974155 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-02 01:07:07.974158 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-02 01:07:07.974162 | orchestrator | 2026-04-02 01:07:07.974167 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-02 01:07:07.974173 | orchestrator | Thursday 02 April 2026 01:03:21 +0000 (0:00:01.357) 0:05:00.741 ******** 2026-04-02 01:07:07.974180 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-02 01:07:07.974187 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-02 01:07:07.974193 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-02 01:07:07.974199 | orchestrator | 2026-04-02 01:07:07.974205 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-02 01:07:07.974212 | orchestrator | Thursday 02 April 2026 01:03:23 +0000 (0:00:01.320) 0:05:02.062 ******** 2026-04-02 01:07:07.974219 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-02 01:07:07.974226 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-02 01:07:07.974234 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-02 01:07:07.974241 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-02 01:07:07.974249 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-02 
01:07:07.974256 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-02 01:07:07.974263 | orchestrator | 2026-04-02 01:07:07.974270 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-02 01:07:07.974277 | orchestrator | Thursday 02 April 2026 01:03:26 +0000 (0:00:03.618) 0:05:05.681 ******** 2026-04-02 01:07:07.974285 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.974296 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.974304 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.974324 | orchestrator | 2026-04-02 01:07:07.974329 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-02 01:07:07.974334 | orchestrator | Thursday 02 April 2026 01:03:27 +0000 (0:00:00.304) 0:05:05.985 ******** 2026-04-02 01:07:07.974339 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.974343 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.974348 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.974352 | orchestrator | 2026-04-02 01:07:07.974357 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-02 01:07:07.974361 | orchestrator | Thursday 02 April 2026 01:03:27 +0000 (0:00:00.288) 0:05:06.274 ******** 2026-04-02 01:07:07.974368 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:07:07.974375 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:07:07.974381 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:07:07.974389 | orchestrator | 2026-04-02 01:07:07.974396 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-02 01:07:07.974403 | orchestrator | Thursday 02 April 2026 01:03:28 +0000 (0:00:01.581) 0:05:07.856 ******** 2026-04-02 01:07:07.974439 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 
'client.nova secret', 'enabled': True}) 2026-04-02 01:07:07.974450 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-02 01:07:07.974456 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-02 01:07:07.974464 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-02 01:07:07.974473 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-02 01:07:07.974477 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-02 01:07:07.974482 | orchestrator | 2026-04-02 01:07:07.974487 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-02 01:07:07.974491 | orchestrator | Thursday 02 April 2026 01:03:33 +0000 (0:00:04.901) 0:05:12.757 ******** 2026-04-02 01:07:07.974496 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-02 01:07:07.974501 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-02 01:07:07.974505 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-02 01:07:07.974510 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-02 01:07:07.974515 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:07:07.974519 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-02 01:07:07.974524 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:07:07.974529 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-02 01:07:07.974533 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:07:07.974538 | orchestrator | 2026-04-02 
01:07:07.974543 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-02 01:07:07.974548 | orchestrator | Thursday 02 April 2026 01:03:39 +0000 (0:00:05.187) 0:05:17.944 ******** 2026-04-02 01:07:07.974552 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.974557 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.974561 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.974566 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-02 01:07:07.974571 | orchestrator | 2026-04-02 01:07:07.974575 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-02 01:07:07.974580 | orchestrator | Thursday 02 April 2026 01:03:41 +0000 (0:00:01.986) 0:05:19.931 ******** 2026-04-02 01:07:07.974584 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-02 01:07:07.974588 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-02 01:07:07.974591 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-02 01:07:07.974595 | orchestrator | 2026-04-02 01:07:07.974599 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-02 01:07:07.974603 | orchestrator | Thursday 02 April 2026 01:03:41 +0000 (0:00:00.783) 0:05:20.715 ******** 2026-04-02 01:07:07.974606 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.974610 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.974614 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.974618 | orchestrator | 2026-04-02 01:07:07.974621 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-02 01:07:07.974625 | orchestrator | Thursday 02 April 2026 01:03:42 +0000 (0:00:00.223) 0:05:20.939 ******** 2026-04-02 01:07:07.974629 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.974633 
| orchestrator | 2026-04-02 01:07:07.974637 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-02 01:07:07.974643 | orchestrator | Thursday 02 April 2026 01:03:42 +0000 (0:00:00.099) 0:05:21.038 ******** 2026-04-02 01:07:07.974652 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.974661 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.974667 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.974673 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.974679 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.974686 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.974693 | orchestrator | 2026-04-02 01:07:07.974699 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-02 01:07:07.974706 | orchestrator | Thursday 02 April 2026 01:03:42 +0000 (0:00:00.572) 0:05:21.610 ******** 2026-04-02 01:07:07.974717 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-02 01:07:07.974722 | orchestrator | 2026-04-02 01:07:07.974726 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-02 01:07:07.974732 | orchestrator | Thursday 02 April 2026 01:03:43 +0000 (0:00:00.696) 0:05:22.307 ******** 2026-04-02 01:07:07.974736 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.974740 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.974744 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.974747 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.974751 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.974755 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.974759 | orchestrator | 2026-04-02 01:07:07.974763 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-02 01:07:07.974766 | orchestrator | Thursday 02 April 2026 
01:03:43 +0000 (0:00:00.463) 0:05:22.770 ******** 2026-04-02 01:07:07.974775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974824 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974863 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974866 | orchestrator | 2026-04-02 01:07:07.974870 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-02 01:07:07.974874 | orchestrator | Thursday 02 April 2026 01:03:48 +0000 (0:00:04.743) 0:05:27.514 ******** 2026-04-02 01:07:07.974878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 
01:07:07.974885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.974891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.974898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.974902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-02 01:07:07.974906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-02 01:07:07.974910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974929 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-04-02 01:07:07.974944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-02 01:07:07.974957 | orchestrator | 2026-04-02 01:07:07.974961 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] 
******************* 2026-04-02 01:07:07.974965 | orchestrator | Thursday 02 April 2026 01:03:54 +0000 (0:00:06.084) 0:05:33.598 ******** 2026-04-02 01:07:07.974969 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.974973 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.974977 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.974980 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.974986 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.974990 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.974994 | orchestrator | 2026-04-02 01:07:07.974998 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-02 01:07:07.975001 | orchestrator | Thursday 02 April 2026 01:03:56 +0000 (0:00:02.012) 0:05:35.611 ******** 2026-04-02 01:07:07.975005 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-02 01:07:07.975009 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-02 01:07:07.975013 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-02 01:07:07.975017 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-02 01:07:07.975021 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-02 01:07:07.975024 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-02 01:07:07.975028 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.975032 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-02 01:07:07.975036 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-02 01:07:07.975040 | orchestrator | skipping: 
[testbed-node-1] 2026-04-02 01:07:07.975044 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-02 01:07:07.975050 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.975054 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-02 01:07:07.975058 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-02 01:07:07.975061 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-02 01:07:07.975065 | orchestrator | 2026-04-02 01:07:07.975069 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-02 01:07:07.975073 | orchestrator | Thursday 02 April 2026 01:04:01 +0000 (0:00:04.702) 0:05:40.314 ******** 2026-04-02 01:07:07.975077 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.975081 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.975084 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:07:07.975088 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.975092 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.975096 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.975099 | orchestrator | 2026-04-02 01:07:07.975103 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-02 01:07:07.975107 | orchestrator | Thursday 02 April 2026 01:04:02 +0000 (0:00:00.661) 0:05:40.975 ******** 2026-04-02 01:07:07.975111 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-02 01:07:07.975115 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-02 01:07:07.975119 | orchestrator | skipping: [testbed-node-1] 
=> (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-02 01:07:07.975122 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-02 01:07:07.975126 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-02 01:07:07.975130 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-02 01:07:07.975134 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-02 01:07:07.975138 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-02 01:07:07.975142 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-02 01:07:07.975145 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.975151 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-02 01:07:07.975155 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.975159 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-02 01:07:07.975163 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.975167 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-02 01:07:07.975170 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-02 01:07:07.975174 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-02 
01:07:07.975178 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-02 01:07:07.975182 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-02 01:07:07.975187 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-02 01:07:07.975197 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-02 01:07:07.975201 | orchestrator | 2026-04-02 01:07:07.975205 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-02 01:07:07.975209 | orchestrator | Thursday 02 April 2026 01:04:07 +0000 (0:00:05.467) 0:05:46.442 ******** 2026-04-02 01:07:07.975212 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-02 01:07:07.975216 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-02 01:07:07.975220 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-02 01:07:07.975224 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-02 01:07:07.975228 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-02 01:07:07.975232 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-02 01:07:07.975236 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-02 01:07:07.975239 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-02 01:07:07.975243 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 
'dest': 'id_rsa'})
2026-04-02 01:07:07.975247 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-02 01:07:07.975251 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-02 01:07:07.975255 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-02 01:07:07.975259 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-02 01:07:07.975263 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-02 01:07:07.975266 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975270 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-02 01:07:07.975274 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975278 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-02 01:07:07.975281 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975285 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-02 01:07:07.975289 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-02 01:07:07.975293 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-02 01:07:07.975297 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-02 01:07:07.975300 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-02 01:07:07.975315 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-02 01:07:07.975319 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-02 01:07:07.975323 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-02 01:07:07.975327 | orchestrator |
2026-04-02 01:07:07.975331 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-02 01:07:07.975335 | orchestrator | Thursday 02 April 2026 01:04:14 +0000 (0:00:07.161) 0:05:53.604 ********
2026-04-02 01:07:07.975338 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.975342 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.975346 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.975350 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975356 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975360 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975364 | orchestrator |
2026-04-02 01:07:07.975368 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-02 01:07:07.975374 | orchestrator | Thursday 02 April 2026 01:04:15 +0000 (0:00:00.435) 0:05:54.040 ********
2026-04-02 01:07:07.975378 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.975381 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.975385 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.975389 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975393 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975396 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975400 | orchestrator |
2026-04-02 01:07:07.975404 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-02 01:07:07.975408 | orchestrator | Thursday 02 April 2026 01:04:15 +0000 (0:00:00.532) 0:05:54.572 ********
2026-04-02 01:07:07.975412 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975415 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975419 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975423 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:07:07.975427 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:07:07.975430 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:07:07.975434 | orchestrator |
2026-04-02 01:07:07.975438 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-02 01:07:07.975442 | orchestrator | Thursday 02 April 2026 01:04:17 +0000 (0:00:01.662) 0:05:56.234 ********
2026-04-02 01:07:07.975446 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975452 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975456 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975460 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:07:07.975463 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:07:07.975467 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:07:07.975471 | orchestrator |
2026-04-02 01:07:07.975475 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-02 01:07:07.975479 | orchestrator | Thursday 02 April 2026 01:04:19 +0000 (0:00:02.162) 0:05:58.397 ********
2026-04-02 01:07:07.975483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-02 01:07:07.975487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-02 01:07:07.975491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975498 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.975504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-02 01:07:07.975509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-02 01:07:07.975515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-02 01:07:07.975519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-02 01:07:07.975523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975531 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.975535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975539 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.975545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-02 01:07:07.975552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975556 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-02 01:07:07.975564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975568 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-02 01:07:07.975578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975582 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975586 | orchestrator |
2026-04-02 01:07:07.975590 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-02 01:07:07.975594 | orchestrator | Thursday 02 April 2026 01:04:21 +0000 (0:00:02.009) 0:06:00.407 ********
2026-04-02 01:07:07.975598 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-02 01:07:07.975602 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-02 01:07:07.975605 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-02 01:07:07.975609 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-02 01:07:07.975613 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.975617 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-02 01:07:07.975621 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-02 01:07:07.975626 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.975630 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-02 01:07:07.975634 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-02 01:07:07.975641 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.975647 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-02 01:07:07.975653 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-02 01:07:07.975660 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975665 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975671 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-02 01:07:07.975677 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-02 01:07:07.975684 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975691 | orchestrator |
2026-04-02 01:07:07.975697 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-04-02 01:07:07.975704 | orchestrator | Thursday 02 April 2026 01:04:22 +0000 (0:00:00.704) 0:06:01.111 ********
2026-04-02 01:07:07.975716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-02 01:07:07.975721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-02 01:07:07.975728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-02 01:07:07.975732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-02 01:07:07.975738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-02 01:07:07.975745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-02 01:07:07.975749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-02 01:07:07.975755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-02 01:07:07.975759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-02 01:07:07.975764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-02 01:07:07.975811 | orchestrator |
2026-04-02 01:07:07.975816 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-02 01:07:07.975822 | orchestrator | Thursday 02 April 2026 01:04:25 +0000 (0:00:03.017) 0:06:04.128 ********
2026-04-02 01:07:07.975829 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.975835 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.975842 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.975848 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.975854 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.975861 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.975867 | orchestrator |
2026-04-02 01:07:07.975872 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-02 01:07:07.975876 | orchestrator | Thursday 02 April 2026 01:04:25 +0000 (0:00:00.682) 0:06:04.811 ********
2026-04-02 01:07:07.975880 | orchestrator |
2026-04-02 01:07:07.975883 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-02 01:07:07.975887 | orchestrator | Thursday 02 April 2026 01:04:26 +0000 (0:00:00.125) 0:06:04.936 ********
2026-04-02 01:07:07.975891 | orchestrator |
2026-04-02 01:07:07.975895 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-02 01:07:07.975899 | orchestrator | Thursday 02 April 2026 01:04:26 +0000 (0:00:00.138) 0:06:05.075 ********
2026-04-02 01:07:07.975902 | orchestrator |
2026-04-02 01:07:07.975906 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-02 01:07:07.975910 | orchestrator | Thursday 02 April 2026 01:04:26 +0000 (0:00:00.130) 0:06:05.205 ********
2026-04-02 01:07:07.975914 | orchestrator |
2026-04-02 01:07:07.975917 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-02 01:07:07.975921 | orchestrator | Thursday 02 April 2026 01:04:26 +0000 (0:00:00.131) 0:06:05.336 ********
2026-04-02 01:07:07.975925 | orchestrator |
2026-04-02 01:07:07.975929 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-02 01:07:07.975932 | orchestrator | Thursday 02 April 2026 01:04:26 +0000 (0:00:00.239) 0:06:05.575 ********
2026-04-02 01:07:07.975936 | orchestrator |
2026-04-02 01:07:07.975943 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-02 01:07:07.975949 | orchestrator | Thursday 02 April 2026 01:04:26 +0000 (0:00:00.125) 0:06:05.701 ********
2026-04-02 01:07:07.975956 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:07:07.975962 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:07:07.975969 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:07:07.975978 | orchestrator |
2026-04-02 01:07:07.975985 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-02 01:07:07.975992 | orchestrator | Thursday 02 April 2026 01:04:33 +0000 (0:00:06.180) 0:06:11.881 ********
2026-04-02 01:07:07.975998 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:07:07.976005 | orchestrator | changed: [testbed-node-1]
2026-04-02 01:07:07.976011 | orchestrator | changed: [testbed-node-2]
2026-04-02 01:07:07.976018 | orchestrator |
2026-04-02 01:07:07.976024 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-02 01:07:07.976031 | orchestrator | Thursday 02 April 2026 01:04:43 +0000 (0:00:10.918) 0:06:22.799 ********
2026-04-02 01:07:07.976035 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:07:07.976039 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:07:07.976043 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:07:07.976047 | orchestrator |
2026-04-02 01:07:07.976054 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-02 01:07:07.976058 | orchestrator | Thursday 02 April 2026 01:05:02 +0000 (0:00:18.223) 0:06:41.023 ********
2026-04-02 01:07:07.976062 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:07:07.976065 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:07:07.976069 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:07:07.976073 | orchestrator |
2026-04-02 01:07:07.976077 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-02 01:07:07.976081 | orchestrator | Thursday 02 April 2026 01:05:35 +0000 (0:00:33.789) 0:07:14.813 ********
2026-04-02 01:07:07.976085 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:07:07.976088 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:07:07.976092 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:07:07.976096 | orchestrator |
2026-04-02 01:07:07.976100 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-02 01:07:07.976107 | orchestrator | Thursday 02 April 2026 01:05:36 +0000 (0:00:00.830) 0:07:15.643 ********
2026-04-02 01:07:07.976113 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:07:07.976119 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:07:07.976126 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:07:07.976131 | orchestrator |
2026-04-02 01:07:07.976136 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-02 01:07:07.976142 | orchestrator | Thursday 02 April 2026 01:05:37 +0000 (0:00:00.742) 0:07:16.386 ********
2026-04-02 01:07:07.976148 | orchestrator | changed: [testbed-node-3]
2026-04-02 01:07:07.976153 | orchestrator | changed: [testbed-node-4]
2026-04-02 01:07:07.976159 | orchestrator | changed: [testbed-node-5]
2026-04-02 01:07:07.976164 | orchestrator |
2026-04-02 01:07:07.976170 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-02 01:07:07.976176 | orchestrator | Thursday 02 April 2026 01:05:57 +0000 (0:00:19.505) 0:07:35.892 ********
2026-04-02 01:07:07.976182 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.976187 | orchestrator |
2026-04-02 01:07:07.976193 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-02 01:07:07.976199 | orchestrator | Thursday 02 April 2026 01:05:57 +0000 (0:00:00.220) 0:07:36.112 ********
2026-04-02 01:07:07.976205 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.976211 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.976216 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.976222 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.976228 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.976234 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-04-02 01:07:07.976241 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-02 01:07:07.976247 | orchestrator |
2026-04-02 01:07:07.976254 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-02 01:07:07.976260 | orchestrator | Thursday 02 April 2026 01:06:18 +0000 (0:00:20.895) 0:07:57.008 ********
2026-04-02 01:07:07.976273 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.976279 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.976285 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.976292 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.976298 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.976331 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.976340 | orchestrator |
2026-04-02 01:07:07.976346 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-02 01:07:07.976353 | orchestrator | Thursday 02 April 2026 01:06:26 +0000 (0:00:08.310) 0:08:05.319 ********
2026-04-02 01:07:07.976359 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:07:07.976364 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.976370 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:07:07.976377 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.976383 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.976390 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-04-02 01:07:07.976397 | orchestrator |
2026-04-02 01:07:07.976405 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-02 01:07:07.976411 | orchestrator | Thursday 02 April 2026 01:06:29 +0000 (0:00:03.516) 0:08:08.835 ********
2026-04-02 01:07:07.976416 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-02 01:07:07.976420 | orchestrator |
2026-04-02 01:07:07.976424 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-02 01:07:07.976428 | orchestrator | Thursday 02 April 2026 01:06:44 +0000 (0:00:14.574) 0:08:23.410 ********
2026-04-02 01:07:07.976432 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-02 01:07:07.976435 | orchestrator |
2026-04-02 01:07:07.976439 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-02 01:07:07.976447 | orchestrator | Thursday 02 April 2026 01:06:45 +0000 (0:00:01.342) 0:08:24.753 ********
2026-04-02 01:07:07.976451 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.976454 | orchestrator |
2026-04-02 01:07:07.976458 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-02 01:07:07.976462 | orchestrator | Thursday 02 April 2026 01:06:47 +0000 (0:00:01.237) 0:08:25.990 ********
2026-04-02 01:07:07.976466 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-02 01:07:07.976472 | orchestrator |
2026-04-02 01:07:07.976478 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-04-02 01:07:07.976485 | orchestrator | Thursday 02 April 2026 01:06:59 +0000 (0:00:11.962) 0:08:37.953 ********
2026-04-02 01:07:07.976495 | orchestrator | ok: [testbed-node-3]
2026-04-02 01:07:07.976501 | orchestrator | ok:
[testbed-node-4] 2026-04-02 01:07:07.976507 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:07:07.976513 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:07:07.976520 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:07:07.976526 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:07:07.976532 | orchestrator | 2026-04-02 01:07:07.976539 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-02 01:07:07.976546 | orchestrator | 2026-04-02 01:07:07.976552 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-02 01:07:07.976566 | orchestrator | Thursday 02 April 2026 01:07:00 +0000 (0:00:01.718) 0:08:39.671 ******** 2026-04-02 01:07:07.976600 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:07:07.976605 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:07:07.976609 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:07:07.976615 | orchestrator | 2026-04-02 01:07:07.976625 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-02 01:07:07.976632 | orchestrator | 2026-04-02 01:07:07.976639 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-02 01:07:07.976645 | orchestrator | Thursday 02 April 2026 01:07:01 +0000 (0:00:01.113) 0:08:40.785 ******** 2026-04-02 01:07:07.976652 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:07:07.976664 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:07:07.976671 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:07:07.976679 | orchestrator | 2026-04-02 01:07:07.976686 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-02 01:07:07.976693 | orchestrator | 2026-04-02 01:07:07.976700 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-02 01:07:07.976707 | orchestrator | 
Thursday 02 April 2026 01:07:02 +0000 (0:00:00.469) 0:08:41.255 ******** 2026-04-02 01:07:07.976713 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-02 01:07:07.976717 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-02 01:07:07.976720 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-02 01:07:07.976724 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-02 01:07:07.976728 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-02 01:07:07.976732 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-02 01:07:07.976736 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:07:07.976740 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-02 01:07:07.976743 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-02 01:07:07.976747 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-02 01:07:07.976751 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-02 01:07:07.976755 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-02 01:07:07.976759 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-02 01:07:07.976763 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:07:07.976769 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-02 01:07:07.976779 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-02 01:07:07.976786 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-02 01:07:07.976793 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-02 01:07:07.976799 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-02 01:07:07.976805 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  
2026-04-02 01:07:07.976811 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:07:07.976817 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-02 01:07:07.976823 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-02 01:07:07.976829 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-02 01:07:07.976835 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-02 01:07:07.976841 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-02 01:07:07.976847 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-02 01:07:07.976853 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.976860 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-02 01:07:07.976866 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-02 01:07:07.976872 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-02 01:07:07.976878 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-02 01:07:07.976884 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-02 01:07:07.976891 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-02 01:07:07.976897 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.976904 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-02 01:07:07.976911 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-02 01:07:07.976917 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-02 01:07:07.976924 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-02 01:07:07.976946 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-02 01:07:07.976952 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-02 01:07:07.976958 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.976965 | orchestrator |
2026-04-02 01:07:07.976971 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-02 01:07:07.976977 | orchestrator |
2026-04-02 01:07:07.976984 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-02 01:07:07.976990 | orchestrator | Thursday 02 April 2026 01:07:03 +0000 (0:00:01.306) 0:08:42.562 ********
2026-04-02 01:07:07.976997 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-02 01:07:07.977003 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-02 01:07:07.977010 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.977014 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-02 01:07:07.977018 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-02 01:07:07.977022 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.977026 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-02 01:07:07.977030 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-02 01:07:07.977033 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.977037 | orchestrator |
2026-04-02 01:07:07.977047 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-02 01:07:07.977052 | orchestrator |
2026-04-02 01:07:07.977056 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-02 01:07:07.977059 | orchestrator | Thursday 02 April 2026 01:07:04 +0000 (0:00:00.701) 0:08:43.263 ********
2026-04-02 01:07:07.977063 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.977067 | orchestrator |
2026-04-02 01:07:07.977071 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-02 01:07:07.977075 | orchestrator |
2026-04-02 01:07:07.977079 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-02 01:07:07.977082 | orchestrator | Thursday 02 April 2026 01:07:05 +0000 (0:00:00.679) 0:08:43.942 ********
2026-04-02 01:07:07.977086 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:07:07.977090 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:07:07.977094 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:07:07.977097 | orchestrator |
2026-04-02 01:07:07.977101 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:07:07.977105 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-02 01:07:07.977110 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-02 01:07:07.977114 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-02 01:07:07.977118 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-02 01:07:07.977121 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-02 01:07:07.977125 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-02 01:07:07.977129 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-02 01:07:07.977133 | orchestrator |
2026-04-02 01:07:07.977137 | orchestrator |
2026-04-02 01:07:07.977140 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:07:07.977148 | orchestrator | Thursday 02 April 2026 01:07:05 +0000 (0:00:00.592) 0:08:44.534 ********
2026-04-02 01:07:07.977151 | orchestrator | ===============================================================================
2026-04-02 01:07:07.977155 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.15s
2026-04-02 01:07:07.977159 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 33.79s
2026-04-02 01:07:07.977163 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.76s
2026-04-02 01:07:07.977166 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.90s
2026-04-02 01:07:07.977170 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.33s
2026-04-02 01:07:07.977174 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.76s
2026-04-02 01:07:07.977178 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.51s
2026-04-02 01:07:07.977182 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.22s
2026-04-02 01:07:07.977185 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.51s
2026-04-02 01:07:07.977189 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.57s
2026-04-02 01:07:07.977193 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.28s
2026-04-02 01:07:07.977197 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.51s
2026-04-02 01:07:07.977200 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.37s
2026-04-02 01:07:07.977204 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.21s
2026-04-02 01:07:07.977210 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.96s
2026-04-02 01:07:07.977214 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 10.92s
2026-04-02 01:07:07.977218 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.87s
2026-04-02 01:07:07.977222 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.53s
2026-04-02 01:07:07.977226 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.31s
2026-04-02 01:07:07.977229 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.14s
2026-04-02 01:07:07.977233 | orchestrator | 2026-04-02 01:07:07 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:07.977237 | orchestrator | 2026-04-02 01:07:07 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:11.008682 | orchestrator | 2026-04-02 01:07:11 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:11.008741 | orchestrator | 2026-04-02 01:07:11 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:14.059284 | orchestrator | 2026-04-02 01:07:14 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:14.059347 | orchestrator | 2026-04-02 01:07:14 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:17.102008 | orchestrator | 2026-04-02 01:07:17 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:17.102103 | orchestrator | 2026-04-02 01:07:17 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:20.135020 | orchestrator | 2026-04-02 01:07:20 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:20.135080 | orchestrator | 2026-04-02 01:07:20 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:23.186130 | orchestrator | 2026-04-02 01:07:23 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:23.186190 | orchestrator | 2026-04-02 01:07:23 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:26.234355 | orchestrator | 2026-04-02 01:07:26 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:26.234426 | orchestrator | 2026-04-02 01:07:26 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:29.274466 | orchestrator | 2026-04-02 01:07:29 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:29.274524 | orchestrator | 2026-04-02 01:07:29 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:32.316215 | orchestrator | 2026-04-02 01:07:32 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:32.316291 | orchestrator | 2026-04-02 01:07:32 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:35.353090 | orchestrator | 2026-04-02 01:07:35 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:35.353198 | orchestrator | 2026-04-02 01:07:35 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:38.397596 | orchestrator | 2026-04-02 01:07:38 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:38.397685 | orchestrator | 2026-04-02 01:07:38 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:41.443112 | orchestrator | 2026-04-02 01:07:41 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:41.443201 | orchestrator | 2026-04-02 01:07:41 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:44.486699 | orchestrator | 2026-04-02 01:07:44 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:44.486788 | orchestrator | 2026-04-02 01:07:44 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:47.530441 | orchestrator | 2026-04-02 01:07:47 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:47.530523 | orchestrator | 2026-04-02 01:07:47 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:50.563369 | orchestrator | 2026-04-02 01:07:50 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:50.563422 | orchestrator | 2026-04-02 01:07:50 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:53.606983 | orchestrator | 2026-04-02 01:07:53 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:53.607073 | orchestrator | 2026-04-02 01:07:53 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:56.640282 | orchestrator | 2026-04-02 01:07:56 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:56.640342 | orchestrator | 2026-04-02 01:07:56 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:07:59.677366 | orchestrator | 2026-04-02 01:07:59 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:07:59.677453 | orchestrator | 2026-04-02 01:07:59 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:02.735368 | orchestrator | 2026-04-02 01:08:02 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:02.735444 | orchestrator | 2026-04-02 01:08:02 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:05.770965 | orchestrator | 2026-04-02 01:08:05 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:05.771046 | orchestrator | 2026-04-02 01:08:05 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:08.803932 | orchestrator | 2026-04-02 01:08:08 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:08.804018 | orchestrator | 2026-04-02 01:08:08 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:11.841922 | orchestrator | 2026-04-02 01:08:11 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:11.842054 | orchestrator | 2026-04-02 01:08:11 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:14.883165 | orchestrator | 2026-04-02 01:08:14 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:14.883225 | orchestrator | 2026-04-02 01:08:14 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:17.927845 | orchestrator | 2026-04-02 01:08:17 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:17.927936 | orchestrator | 2026-04-02 01:08:17 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:20.966118 | orchestrator | 2026-04-02 01:08:20 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:20.966353 | orchestrator | 2026-04-02 01:08:20 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:24.007386 | orchestrator | 2026-04-02 01:08:24 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:24.007460 | orchestrator | 2026-04-02 01:08:24 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:27.055060 | orchestrator | 2026-04-02 01:08:27 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:27.055147 | orchestrator | 2026-04-02 01:08:27 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:30.092824 | orchestrator | 2026-04-02 01:08:30 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:30.092876 | orchestrator | 2026-04-02 01:08:30 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:33.136078 | orchestrator | 2026-04-02 01:08:33 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:33.136257 | orchestrator | 2026-04-02 01:08:33 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:36.185476 | orchestrator | 2026-04-02 01:08:36 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:36.185552 | orchestrator | 2026-04-02 01:08:36 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:39.241343 | orchestrator | 2026-04-02 01:08:39 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:39.241444 | orchestrator | 2026-04-02 01:08:39 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:42.288628 | orchestrator | 2026-04-02 01:08:42 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:42.288689 | orchestrator | 2026-04-02 01:08:42 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:45.338689 | orchestrator | 2026-04-02 01:08:45 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:45.338737 | orchestrator | 2026-04-02 01:08:45 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:48.385696 | orchestrator | 2026-04-02 01:08:48 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:48.385770 | orchestrator | 2026-04-02 01:08:48 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:51.433659 | orchestrator | 2026-04-02 01:08:51 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:51.433715 | orchestrator | 2026-04-02 01:08:51 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:54.477658 | orchestrator | 2026-04-02 01:08:54 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:54.477759 | orchestrator | 2026-04-02 01:08:54 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:08:57.515632 | orchestrator | 2026-04-02 01:08:57 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:08:57.515753 | orchestrator | 2026-04-02 01:08:57 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:00.557887 | orchestrator | 2026-04-02 01:09:00 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:00.557946 | orchestrator | 2026-04-02 01:09:00 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:03.601640 | orchestrator | 2026-04-02 01:09:03 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:03.601748 | orchestrator | 2026-04-02 01:09:03 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:06.649800 | orchestrator | 2026-04-02 01:09:06 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:06.649888 | orchestrator | 2026-04-02 01:09:06 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:09.692951 | orchestrator | 2026-04-02 01:09:09 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:09.693064 | orchestrator | 2026-04-02 01:09:09 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:12.732613 | orchestrator | 2026-04-02 01:09:12 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:12.732670 | orchestrator | 2026-04-02 01:09:12 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:15.779530 | orchestrator | 2026-04-02 01:09:15 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:15.779601 | orchestrator | 2026-04-02 01:09:15 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:18.820217 | orchestrator | 2026-04-02 01:09:18 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:18.820275 | orchestrator | 2026-04-02 01:09:18 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:21.866491 | orchestrator | 2026-04-02 01:09:21 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:21.866550 | orchestrator | 2026-04-02 01:09:21 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:24.905642 | orchestrator | 2026-04-02 01:09:24 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:24.905723 | orchestrator | 2026-04-02 01:09:24 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:27.945759 | orchestrator | 2026-04-02 01:09:27 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:27.945848 | orchestrator | 2026-04-02 01:09:27 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:30.986744 | orchestrator | 2026-04-02 01:09:30 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:30.986839 | orchestrator | 2026-04-02 01:09:30 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:34.023460 | orchestrator | 2026-04-02 01:09:34 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:34.023546 | orchestrator | 2026-04-02 01:09:34 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:37.060820 | orchestrator | 2026-04-02 01:09:37 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:37.060877 | orchestrator | 2026-04-02 01:09:37 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:40.103324 | orchestrator | 2026-04-02 01:09:40 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state STARTED
2026-04-02 01:09:40.103385 | orchestrator | 2026-04-02 01:09:40 | INFO  | Wait 1 second(s) until the next check
2026-04-02 01:09:43.145310 | orchestrator | 2026-04-02 01:09:43 | INFO  | Task 4801e223-5028-488b-8554-7e3faac10e43 is in state SUCCESS
2026-04-02 01:09:43.146001 | orchestrator |
2026-04-02 01:09:43.146164 | orchestrator |
2026-04-02 01:09:43.146175 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-02 01:09:43.146183 | orchestrator |
2026-04-02 01:09:43.146190 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-02 01:09:43.146198 | orchestrator | Thursday 02 April 2026 01:05:05 +0000 (0:00:00.316) 0:00:00.316 ********
2026-04-02 01:09:43.146205 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:09:43.146213 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:09:43.146220 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:09:43.146227 | orchestrator |
2026-04-02 01:09:43.146233 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-02 01:09:43.146240 | orchestrator | Thursday 02 April 2026 01:05:06 +0000 (0:00:00.256) 0:00:00.572 ********
2026-04-02 01:09:43.146246 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-02 01:09:43.146254 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-02 01:09:43.146260 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-02 01:09:43.146266 | orchestrator |
2026-04-02 01:09:43.146377 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-02 01:09:43.146388 | orchestrator |
2026-04-02 01:09:43.146395 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-02 01:09:43.146402 | orchestrator | Thursday 02 April 2026 01:05:06 +0000 (0:00:00.361) 0:00:00.934 ********
2026-04-02 01:09:43.146409 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:09:43.146417 | orchestrator |
2026-04-02 01:09:43.146425 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-04-02 01:09:43.146432 | orchestrator | Thursday 02 April 2026 01:05:07 +0000 (0:00:00.784) 0:00:01.719 ********
2026-04-02 01:09:43.146439 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-02 01:09:43.146446 | orchestrator |
2026-04-02 01:09:43.146453 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-04-02 01:09:43.146460 | orchestrator | Thursday 02 April 2026 01:05:11 +0000 (0:00:03.741) 0:00:05.460 ********
2026-04-02 01:09:43.146467 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-02 01:09:43.146474 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-02 01:09:43.146667 | orchestrator |
2026-04-02 01:09:43.146678 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-02 01:09:43.146685 | orchestrator | Thursday 02 April 2026 01:05:16 +0000 (0:00:05.900) 0:00:11.361 ********
2026-04-02 01:09:43.146692 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-02 01:09:43.146700 | orchestrator |
2026-04-02 01:09:43.146706 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-02 01:09:43.146713 | orchestrator | Thursday 02 April 2026 01:05:19 +0000 (0:00:02.613) 0:00:13.974 ********
2026-04-02 01:09:43.146720 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-02 01:09:43.146727 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-02 01:09:43.146734 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-02 01:09:43.146741 | orchestrator |
2026-04-02 01:09:43.146748 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-02 01:09:43.146756 | orchestrator | Thursday 02 April 2026 01:05:26 +0000 (0:00:07.152) 0:00:21.127 ********
2026-04-02 01:09:43.146762 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-02 01:09:43.146768 | orchestrator |
2026-04-02 01:09:43.146776 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-04-02 01:09:43.146782 | orchestrator | Thursday 02 April 2026 01:05:29 +0000 (0:00:03.182) 0:00:24.309 ********
2026-04-02 01:09:43.146809 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-02 01:09:43.146816 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-02 01:09:43.146823 | orchestrator |
2026-04-02 01:09:43.146830 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-02 01:09:43.146837 | orchestrator | Thursday 02 April 2026 01:05:37 +0000 (0:00:07.824) 0:00:32.134 ********
2026-04-02 01:09:43.146843 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-02 01:09:43.146850 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-02 01:09:43.146857 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-02 01:09:43.146864 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-02 01:09:43.146871 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-02 01:09:43.146877 | orchestrator |
2026-04-02 01:09:43.146884 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-02 01:09:43.146891 | orchestrator | Thursday 02 April 2026 01:05:53 +0000 (0:00:15.610) 0:00:47.744 ********
2026-04-02 01:09:43.146898 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:09:43.146904 | orchestrator |
2026-04-02 01:09:43.146911 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-02 01:09:43.146918 | orchestrator | Thursday 02 April 2026 01:05:54 +0000 (0:00:00.708) 0:00:48.453 ********
2026-04-02 01:09:43.147018 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:09:43.147025 | orchestrator |
2026-04-02 01:09:43.147032 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-02 01:09:43.147039 | orchestrator | Thursday 02 April 2026 01:05:58 +0000 (0:00:04.503) 0:00:52.957 ********
2026-04-02 01:09:43.147046 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:09:43.147373 | orchestrator |
2026-04-02 01:09:43.147394 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-02 01:09:43.147426 | orchestrator | Thursday 02 April 2026 01:06:02 +0000 (0:00:03.705) 0:00:56.663 ********
2026-04-02 01:09:43.147434 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:09:43.147441 | orchestrator |
2026-04-02 01:09:43.147449 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-02 01:09:43.147456 | orchestrator | Thursday 02 April 2026 01:06:05 +0000 (0:00:02.904) 0:00:59.568 ********
2026-04-02 01:09:43.147463 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-02 01:09:43.147470 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-02 01:09:43.147477 | orchestrator |
2026-04-02 01:09:43.147483 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-02 01:09:43.147490 | orchestrator | Thursday 02 April 2026 01:06:16 +0000 (0:00:10.910) 0:01:10.478 ********
2026-04-02 01:09:43.147497 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-02 01:09:43.147513 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-02 01:09:43.147520 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-02 01:09:43.147527 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-02 01:09:43.147533 | orchestrator |
2026-04-02 01:09:43.147540 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-02 01:09:43.147547
| orchestrator | Thursday 02 April 2026 01:06:31 +0000 (0:00:15.546) 0:01:26.025 ******** 2026-04-02 01:09:43.147554 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.147561 | orchestrator | 2026-04-02 01:09:43.147567 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-02 01:09:43.147583 | orchestrator | Thursday 02 April 2026 01:06:36 +0000 (0:00:04.521) 0:01:30.547 ******** 2026-04-02 01:09:43.147590 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.147596 | orchestrator | 2026-04-02 01:09:43.147603 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-02 01:09:43.147609 | orchestrator | Thursday 02 April 2026 01:06:41 +0000 (0:00:05.512) 0:01:36.059 ******** 2026-04-02 01:09:43.147616 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:09:43.147622 | orchestrator | 2026-04-02 01:09:43.147629 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-02 01:09:43.147635 | orchestrator | Thursday 02 April 2026 01:06:41 +0000 (0:00:00.202) 0:01:36.261 ******** 2026-04-02 01:09:43.147642 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.147648 | orchestrator | 2026-04-02 01:09:43.147655 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-02 01:09:43.147661 | orchestrator | Thursday 02 April 2026 01:06:46 +0000 (0:00:04.443) 0:01:40.704 ******** 2026-04-02 01:09:43.147668 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:09:43.147674 | orchestrator | 2026-04-02 01:09:43.147682 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-02 01:09:43.147688 | orchestrator | Thursday 02 April 2026 01:06:47 +0000 (0:00:00.757) 0:01:41.462 ******** 2026-04-02 01:09:43.147713 | orchestrator | changed: 
[testbed-node-1] 2026-04-02 01:09:43.147720 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.147727 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.147733 | orchestrator | 2026-04-02 01:09:43.147741 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-02 01:09:43.147748 | orchestrator | Thursday 02 April 2026 01:06:53 +0000 (0:00:06.875) 0:01:48.337 ******** 2026-04-02 01:09:43.147756 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.147763 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.147770 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.147777 | orchestrator | 2026-04-02 01:09:43.147785 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-02 01:09:43.147792 | orchestrator | Thursday 02 April 2026 01:06:58 +0000 (0:00:04.624) 0:01:52.962 ******** 2026-04-02 01:09:43.147799 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.147807 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.147814 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.147821 | orchestrator | 2026-04-02 01:09:43.147829 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-02 01:09:43.147836 | orchestrator | Thursday 02 April 2026 01:06:59 +0000 (0:00:00.640) 0:01:53.603 ******** 2026-04-02 01:09:43.147843 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:09:43.147850 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.147857 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:09:43.147865 | orchestrator | 2026-04-02 01:09:43.147872 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-02 01:09:43.147879 | orchestrator | Thursday 02 April 2026 01:07:00 +0000 (0:00:01.664) 0:01:55.267 ******** 2026-04-02 01:09:43.147886 | orchestrator | changed: [testbed-node-1] 2026-04-02 
01:09:43.147894 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.147901 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.147908 | orchestrator | 2026-04-02 01:09:43.147915 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-02 01:09:43.147922 | orchestrator | Thursday 02 April 2026 01:07:02 +0000 (0:00:01.308) 0:01:56.576 ******** 2026-04-02 01:09:43.147929 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.147936 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.147943 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.147950 | orchestrator | 2026-04-02 01:09:43.147957 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-02 01:09:43.147964 | orchestrator | Thursday 02 April 2026 01:07:03 +0000 (0:00:01.239) 0:01:57.815 ******** 2026-04-02 01:09:43.147978 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.147986 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.147994 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.148002 | orchestrator | 2026-04-02 01:09:43.148036 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-02 01:09:43.148046 | orchestrator | Thursday 02 April 2026 01:07:05 +0000 (0:00:02.548) 0:02:00.364 ******** 2026-04-02 01:09:43.148076 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.148087 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.148093 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.148103 | orchestrator | 2026-04-02 01:09:43.148111 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-02 01:09:43.148118 | orchestrator | Thursday 02 April 2026 01:07:07 +0000 (0:00:02.017) 0:02:02.381 ******** 2026-04-02 01:09:43.148126 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.148134 | 
orchestrator | ok: [testbed-node-1] 2026-04-02 01:09:43.148142 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:09:43.148149 | orchestrator | 2026-04-02 01:09:43.148156 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-02 01:09:43.148165 | orchestrator | Thursday 02 April 2026 01:07:08 +0000 (0:00:00.566) 0:02:02.947 ******** 2026-04-02 01:09:43.148173 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:09:43.148186 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:09:43.148194 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.148201 | orchestrator | 2026-04-02 01:09:43.148263 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-02 01:09:43.148272 | orchestrator | Thursday 02 April 2026 01:07:11 +0000 (0:00:03.236) 0:02:06.184 ******** 2026-04-02 01:09:43.148280 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-02 01:09:43.148288 | orchestrator | 2026-04-02 01:09:43.148295 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-02 01:09:43.148303 | orchestrator | Thursday 02 April 2026 01:07:12 +0000 (0:00:00.674) 0:02:06.859 ******** 2026-04-02 01:09:43.148310 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.148318 | orchestrator | 2026-04-02 01:09:43.148325 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-02 01:09:43.148331 | orchestrator | Thursday 02 April 2026 01:07:15 +0000 (0:00:03.532) 0:02:10.392 ******** 2026-04-02 01:09:43.148338 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.148345 | orchestrator | 2026-04-02 01:09:43.148352 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-02 01:09:43.148358 | orchestrator | Thursday 02 April 2026 01:07:19 +0000 (0:00:03.101) 
0:02:13.493 ******** 2026-04-02 01:09:43.148365 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-02 01:09:43.148372 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-02 01:09:43.148378 | orchestrator | 2026-04-02 01:09:43.148383 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-02 01:09:43.148389 | orchestrator | Thursday 02 April 2026 01:07:25 +0000 (0:00:06.464) 0:02:19.958 ******** 2026-04-02 01:09:43.148394 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.148400 | orchestrator | 2026-04-02 01:09:43.148406 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-02 01:09:43.148411 | orchestrator | Thursday 02 April 2026 01:07:28 +0000 (0:00:03.341) 0:02:23.299 ******** 2026-04-02 01:09:43.148417 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:09:43.148422 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:09:43.148428 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:09:43.148433 | orchestrator | 2026-04-02 01:09:43.148439 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-02 01:09:43.148444 | orchestrator | Thursday 02 April 2026 01:07:29 +0000 (0:00:00.264) 0:02:23.564 ******** 2026-04-02 01:09:43.148453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.148504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.148519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.148527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.148535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.148542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.148555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.148650 | orchestrator | 2026-04-02 01:09:43.148656 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-02 01:09:43.148663 | orchestrator | Thursday 02 April 2026 01:07:31 +0000 (0:00:02.526) 0:02:26.090 ******** 2026-04-02 01:09:43.148670 | orchestrator | skipping: [testbed-node-0] 2026-04-02 
01:09:43.148676 | orchestrator | 2026-04-02 01:09:43.148700 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-02 01:09:43.148707 | orchestrator | Thursday 02 April 2026 01:07:31 +0000 (0:00:00.113) 0:02:26.204 ******** 2026-04-02 01:09:43.148715 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:09:43.148721 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:09:43.148728 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:09:43.148735 | orchestrator | 2026-04-02 01:09:43.148742 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-02 01:09:43.148749 | orchestrator | Thursday 02 April 2026 01:07:32 +0000 (0:00:00.239) 0:02:26.444 ******** 2026-04-02 01:09:43.148761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 01:09:43.148768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.148781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.148787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.148795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:09:43.148802 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:09:43.148828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 01:09:43.148845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.148852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.148864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.148871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:09:43.148878 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:09:43.148885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 01:09:43.148913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.148924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 
01:09:43.148931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-02 01:09:43.148944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:09:43.148950 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:09:43.148961 | orchestrator |
2026-04-02 01:09:43.148967 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-02 01:09:43.148974 | orchestrator | Thursday 02 April 2026 01:07:32 +0000 (0:00:00.629) 0:02:27.073 ********
2026-04-02 01:09:43.148981 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:09:43.148987 | orchestrator |
2026-04-02 01:09:43.148994 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-02 01:09:43.149001 | orchestrator | Thursday 02 April 2026 01:07:33
+0000 (0:00:00.713) 0:02:27.787 ******** 2026-04-02 01:09:43.149008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149044 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-04-02 01:09:43.149129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:09:43.149230 | orchestrator |
2026-04-02 01:09:43.149237 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-02 01:09:43.149244 | orchestrator | Thursday 02 April 2026 01:07:38 +0000 (0:00:05.409) 0:02:33.197 ********
2026-04-02 01:09:43.149255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-02 01:09:43.149269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.149276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:09:43.149299 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:09:43.149312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 01:09:43.149328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.149336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:09:43.149359 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:09:43.149367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 01:09:43.149374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.149391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-02 01:09:43.149410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-02 01:09:43.149417 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:09:43.149424 | orchestrator |
2026-04-02 01:09:43.149431 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-04-02 01:09:43.149439 | orchestrator | Thursday 02 April 2026 01:07:39 +0000 (0:00:00.660) 0:02:33.858 ********
2026-04-02 01:09:43.149446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 01:09:43.149454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.149461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:09:43.149499 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:09:43.149506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-04-02 01:09:43.149513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.149521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 01:09:43.149551 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:09:43.149562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-02 01:09:43.149570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-02 01:09:43.149577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-02 01:09:43.149590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-02 
01:09:43.149604 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:09:43.149610 | orchestrator | 2026-04-02 01:09:43.149616 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-02 01:09:43.149624 | orchestrator | Thursday 02 April 2026 01:07:40 +0000 (0:00:00.978) 0:02:34.836 ******** 2026-04-02 01:09:43.149636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 
01:09:43.149777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149785 | orchestrator | 2026-04-02 01:09:43.149792 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-02 01:09:43.149800 | orchestrator | Thursday 02 April 2026 01:07:45 +0000 (0:00:05.200) 0:02:40.037 ******** 2026-04-02 01:09:43.149807 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-02 01:09:43.149817 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-02 01:09:43.149824 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-02 01:09:43.149831 | orchestrator | 2026-04-02 01:09:43.149838 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-02 01:09:43.149845 | orchestrator | Thursday 02 April 2026 01:07:47 +0000 (0:00:01.796) 0:02:41.833 ******** 2026-04-02 01:09:43.149853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.149890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.149913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.149966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150145 | orchestrator | 2026-04-02 01:09:43.150153 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2026-04-02 01:09:43.150160 | orchestrator | Thursday 02 April 2026 01:08:03 +0000 (0:00:16.387) 0:02:58.221 ******** 2026-04-02 01:09:43.150168 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.150175 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.150182 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.150190 | orchestrator | 2026-04-02 01:09:43.150197 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-02 01:09:43.150204 | orchestrator | Thursday 02 April 2026 01:08:05 +0000 (0:00:02.063) 0:03:00.285 ******** 2026-04-02 01:09:43.150211 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150219 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150232 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150240 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150247 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150254 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150261 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150269 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150276 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150283 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150290 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150297 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150304 | orchestrator | 2026-04-02 01:09:43.150314 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2026-04-02 01:09:43.150321 | orchestrator | Thursday 02 April 2026 01:08:11 +0000 (0:00:05.156) 0:03:05.441 ******** 2026-04-02 01:09:43.150326 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150332 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150338 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150344 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150350 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150359 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150375 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150382 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150388 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150394 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150400 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150406 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150412 | orchestrator | 2026-04-02 01:09:43.150420 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-02 01:09:43.150427 | orchestrator | Thursday 02 April 2026 01:08:15 +0000 (0:00:04.614) 0:03:10.056 ******** 2026-04-02 01:09:43.150434 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150441 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150448 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-02 01:09:43.150455 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150462 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150470 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-02 01:09:43.150477 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150484 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150490 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-02 01:09:43.150497 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150504 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150512 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-02 01:09:43.150518 | orchestrator | 2026-04-02 01:09:43.150525 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-02 01:09:43.150531 | orchestrator | Thursday 02 April 2026 01:08:21 +0000 (0:00:05.658) 0:03:15.715 ******** 2026-04-02 01:09:43.150538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.150552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.150569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-02 01:09:43.150577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.150585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.150593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-02 01:09:43.150600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-02 01:09:43.150695 | orchestrator | 2026-04-02 01:09:43.150702 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-02 01:09:43.150710 | orchestrator | Thursday 02 April 2026 01:08:24 +0000 (0:00:03.634) 0:03:19.349 ******** 2026-04-02 01:09:43.150717 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:09:43.150724 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:09:43.150731 | orchestrator | skipping: [testbed-node-2] 
2026-04-02 01:09:43.150739 | orchestrator | 2026-04-02 01:09:43.150745 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-02 01:09:43.150751 | orchestrator | Thursday 02 April 2026 01:08:25 +0000 (0:00:00.456) 0:03:19.805 ******** 2026-04-02 01:09:43.150758 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.150765 | orchestrator | 2026-04-02 01:09:43.150779 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-02 01:09:43.150785 | orchestrator | Thursday 02 April 2026 01:08:27 +0000 (0:00:02.507) 0:03:22.313 ******** 2026-04-02 01:09:43.150792 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.150798 | orchestrator | 2026-04-02 01:09:43.150805 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-02 01:09:43.150812 | orchestrator | Thursday 02 April 2026 01:08:30 +0000 (0:00:02.382) 0:03:24.695 ******** 2026-04-02 01:09:43.150819 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.150827 | orchestrator | 2026-04-02 01:09:43.150834 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-02 01:09:43.150841 | orchestrator | Thursday 02 April 2026 01:08:32 +0000 (0:00:02.219) 0:03:26.914 ******** 2026-04-02 01:09:43.150848 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.150855 | orchestrator | 2026-04-02 01:09:43.150863 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-02 01:09:43.150869 | orchestrator | Thursday 02 April 2026 01:08:34 +0000 (0:00:02.467) 0:03:29.382 ******** 2026-04-02 01:09:43.150875 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.150882 | orchestrator | 2026-04-02 01:09:43.150889 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-02 01:09:43.150896 | orchestrator | 
Thursday 02 April 2026 01:08:55 +0000 (0:00:20.742) 0:03:50.124 ******** 2026-04-02 01:09:43.150903 | orchestrator | 2026-04-02 01:09:43.150910 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-02 01:09:43.150917 | orchestrator | Thursday 02 April 2026 01:08:55 +0000 (0:00:00.065) 0:03:50.190 ******** 2026-04-02 01:09:43.150924 | orchestrator | 2026-04-02 01:09:43.150931 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-02 01:09:43.150939 | orchestrator | Thursday 02 April 2026 01:08:55 +0000 (0:00:00.065) 0:03:50.256 ******** 2026-04-02 01:09:43.150946 | orchestrator | 2026-04-02 01:09:43.150953 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-02 01:09:43.150960 | orchestrator | Thursday 02 April 2026 01:08:55 +0000 (0:00:00.067) 0:03:50.323 ******** 2026-04-02 01:09:43.150967 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.150974 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.150981 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.150988 | orchestrator | 2026-04-02 01:09:43.150995 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-02 01:09:43.151003 | orchestrator | Thursday 02 April 2026 01:09:04 +0000 (0:00:08.716) 0:03:59.040 ******** 2026-04-02 01:09:43.151010 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.151017 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.151024 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.151031 | orchestrator | 2026-04-02 01:09:43.151038 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-02 01:09:43.151045 | orchestrator | Thursday 02 April 2026 01:09:16 +0000 (0:00:11.716) 0:04:10.757 ******** 2026-04-02 01:09:43.151070 | orchestrator | changed: [testbed-node-2] 
2026-04-02 01:09:43.151083 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.151090 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.151096 | orchestrator | 2026-04-02 01:09:43.151102 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-02 01:09:43.151108 | orchestrator | Thursday 02 April 2026 01:09:26 +0000 (0:00:09.939) 0:04:20.696 ******** 2026-04-02 01:09:43.151115 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.151122 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.151128 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.151135 | orchestrator | 2026-04-02 01:09:43.151141 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-02 01:09:43.151148 | orchestrator | Thursday 02 April 2026 01:09:36 +0000 (0:00:10.279) 0:04:30.975 ******** 2026-04-02 01:09:43.151155 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:09:43.151163 | orchestrator | changed: [testbed-node-1] 2026-04-02 01:09:43.151170 | orchestrator | changed: [testbed-node-2] 2026-04-02 01:09:43.151176 | orchestrator | 2026-04-02 01:09:43.151183 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:09:43.151191 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-02 01:09:43.151199 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-02 01:09:43.151206 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-02 01:09:43.151213 | orchestrator | 2026-04-02 01:09:43.151220 | orchestrator | 2026-04-02 01:09:43.151226 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:09:43.151233 | orchestrator | Thursday 02 April 2026 01:09:41 +0000 
(0:00:05.037) 0:04:36.012 ******** 2026-04-02 01:09:43.151245 | orchestrator | =============================================================================== 2026-04-02 01:09:43.151252 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.74s 2026-04-02 01:09:43.151259 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.39s 2026-04-02 01:09:43.151266 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.61s 2026-04-02 01:09:43.151273 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.55s 2026-04-02 01:09:43.151280 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.72s 2026-04-02 01:09:43.151287 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.91s 2026-04-02 01:09:43.151294 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.28s 2026-04-02 01:09:43.151301 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.94s 2026-04-02 01:09:43.151312 | orchestrator | octavia : Restart octavia-api container --------------------------------- 8.72s 2026-04-02 01:09:43.151318 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.82s 2026-04-02 01:09:43.151325 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.15s 2026-04-02 01:09:43.151332 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.88s 2026-04-02 01:09:43.151339 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.46s 2026-04-02 01:09:43.151345 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.90s 2026-04-02 01:09:43.151352 | orchestrator | octavia : Copying certificate files for 
octavia-health-manager ---------- 5.66s 2026-04-02 01:09:43.151358 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.51s 2026-04-02 01:09:43.151365 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.41s 2026-04-02 01:09:43.151371 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.20s 2026-04-02 01:09:43.151383 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.16s 2026-04-02 01:09:43.151389 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.04s 2026-04-02 01:09:43.151395 | orchestrator | 2026-04-02 01:09:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-02 01:10:44.009664 | orchestrator | 2026-04-02 01:10:44.183498 | orchestrator | 2026-04-02 01:10:44.190253 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Apr 2 01:10:44 UTC 2026 2026-04-02 01:10:44.190338 | orchestrator | 2026-04-02 01:10:44.575594 | orchestrator | ok: Runtime: 0:31:36.505691 2026-04-02 01:10:44.847873 | 2026-04-02 01:10:44.848050 | TASK [Bootstrap services] 2026-04-02 01:10:45.696099 | orchestrator | 2026-04-02 01:10:45.696248 | orchestrator | # BOOTSTRAP 2026-04-02 01:10:45.696267 | orchestrator | 2026-04-02 01:10:45.696274 | orchestrator | + set -e 2026-04-02 01:10:45.696281 | orchestrator | + echo 2026-04-02 01:10:45.696290 | orchestrator | + echo '# BOOTSTRAP' 2026-04-02 01:10:45.696300 | orchestrator | + echo 2026-04-02 01:10:45.696328 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-02 01:10:45.705016 | 
orchestrator | + set -e 2026-04-02 01:10:45.705097 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-02 01:10:50.228101 | orchestrator | 2026-04-02 01:10:50 | INFO  | It takes a moment until task d5b5683a-c397-451f-a5c8-0d614480d369 (flavor-manager) has been started and output is visible here. 2026-04-02 01:10:59.409987 | orchestrator | 2026-04-02 01:10:54 | INFO  | Flavor SCS-1L-1 created 2026-04-02 01:10:59.410120 | orchestrator | 2026-04-02 01:10:54 | INFO  | Flavor SCS-1L-1-5 created 2026-04-02 01:10:59.410134 | orchestrator | 2026-04-02 01:10:55 | INFO  | Flavor SCS-1V-2 created 2026-04-02 01:10:59.410139 | orchestrator | 2026-04-02 01:10:55 | INFO  | Flavor SCS-1V-2-5 created 2026-04-02 01:10:59.410143 | orchestrator | 2026-04-02 01:10:55 | INFO  | Flavor SCS-1V-4 created 2026-04-02 01:10:59.410147 | orchestrator | 2026-04-02 01:10:55 | INFO  | Flavor SCS-1V-4-10 created 2026-04-02 01:10:59.410151 | orchestrator | 2026-04-02 01:10:55 | INFO  | Flavor SCS-1V-8 created 2026-04-02 01:10:59.410156 | orchestrator | 2026-04-02 01:10:56 | INFO  | Flavor SCS-1V-8-20 created 2026-04-02 01:10:59.410167 | orchestrator | 2026-04-02 01:10:56 | INFO  | Flavor SCS-2V-4 created 2026-04-02 01:10:59.410172 | orchestrator | 2026-04-02 01:10:56 | INFO  | Flavor SCS-2V-4-10 created 2026-04-02 01:10:59.410175 | orchestrator | 2026-04-02 01:10:56 | INFO  | Flavor SCS-2V-8 created 2026-04-02 01:10:59.410179 | orchestrator | 2026-04-02 01:10:56 | INFO  | Flavor SCS-2V-8-20 created 2026-04-02 01:10:59.410183 | orchestrator | 2026-04-02 01:10:56 | INFO  | Flavor SCS-2V-16 created 2026-04-02 01:10:59.410187 | orchestrator | 2026-04-02 01:10:56 | INFO  | Flavor SCS-2V-16-50 created 2026-04-02 01:10:59.410190 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-4V-8 created 2026-04-02 01:10:59.410194 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-4V-8-20 created 2026-04-02 01:10:59.410198 | orchestrator | 2026-04-02 01:10:57 | INFO  | 
Flavor SCS-4V-16 created 2026-04-02 01:10:59.410202 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-4V-16-50 created 2026-04-02 01:10:59.410205 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-4V-32 created 2026-04-02 01:10:59.410209 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-4V-32-100 created 2026-04-02 01:10:59.410213 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-8V-16 created 2026-04-02 01:10:59.410217 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-8V-16-50 created 2026-04-02 01:10:59.410221 | orchestrator | 2026-04-02 01:10:57 | INFO  | Flavor SCS-8V-32 created 2026-04-02 01:10:59.410226 | orchestrator | 2026-04-02 01:10:58 | INFO  | Flavor SCS-8V-32-100 created 2026-04-02 01:10:59.410232 | orchestrator | 2026-04-02 01:10:58 | INFO  | Flavor SCS-16V-32 created 2026-04-02 01:10:59.410237 | orchestrator | 2026-04-02 01:10:58 | INFO  | Flavor SCS-16V-32-100 created 2026-04-02 01:10:59.410243 | orchestrator | 2026-04-02 01:10:58 | INFO  | Flavor SCS-2V-4-20s created 2026-04-02 01:10:59.410249 | orchestrator | 2026-04-02 01:10:58 | INFO  | Flavor SCS-4V-8-50s created 2026-04-02 01:10:59.410255 | orchestrator | 2026-04-02 01:10:58 | INFO  | Flavor SCS-4V-16-100s created 2026-04-02 01:10:59.410262 | orchestrator | 2026-04-02 01:10:59 | INFO  | Flavor SCS-8V-32-100s created 2026-04-02 01:11:00.934903 | orchestrator | 2026-04-02 01:11:00 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-02 01:11:10.983085 | orchestrator | 2026-04-02 01:11:10 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-02 01:11:11.064276 | orchestrator | 2026-04-02 01:11:11 | INFO  | Task 2a68eca1-0092-46df-97b2-c52bf5ed3227 (bootstrap-basic) was prepared for execution. 2026-04-02 01:11:11.064405 | orchestrator | 2026-04-02 01:11:11 | INFO  | It takes a moment until task 2a68eca1-0092-46df-97b2-c52bf5ed3227 (bootstrap-basic) has been started and output is visible here. 
2026-04-02 01:11:57.371697 | orchestrator | 2026-04-02 01:11:57.371803 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-02 01:11:57.371816 | orchestrator | 2026-04-02 01:11:57.371898 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-02 01:11:57.371904 | orchestrator | Thursday 02 April 2026 01:11:14 +0000 (0:00:00.103) 0:00:00.103 ******** 2026-04-02 01:11:57.371909 | orchestrator | ok: [localhost] 2026-04-02 01:11:57.371914 | orchestrator | 2026-04-02 01:11:57.371918 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-02 01:11:57.371922 | orchestrator | Thursday 02 April 2026 01:11:16 +0000 (0:00:02.019) 0:00:02.123 ******** 2026-04-02 01:11:57.371928 | orchestrator | ok: [localhost] 2026-04-02 01:11:57.371933 | orchestrator | 2026-04-02 01:11:57.371937 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-02 01:11:57.371941 | orchestrator | Thursday 02 April 2026 01:11:25 +0000 (0:00:09.602) 0:00:11.726 ******** 2026-04-02 01:11:57.371945 | orchestrator | changed: [localhost] 2026-04-02 01:11:57.371949 | orchestrator | 2026-04-02 01:11:57.371953 | orchestrator | TASK [Create public network] *************************************************** 2026-04-02 01:11:57.371958 | orchestrator | Thursday 02 April 2026 01:11:33 +0000 (0:00:07.956) 0:00:19.682 ******** 2026-04-02 01:11:57.371964 | orchestrator | changed: [localhost] 2026-04-02 01:11:57.371970 | orchestrator | 2026-04-02 01:11:57.371981 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-02 01:11:57.371987 | orchestrator | Thursday 02 April 2026 01:11:38 +0000 (0:00:04.984) 0:00:24.667 ******** 2026-04-02 01:11:57.371993 | orchestrator | changed: [localhost] 2026-04-02 01:11:57.371999 | orchestrator | 2026-04-02 01:11:57.372005 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-02 01:11:57.372011 | orchestrator | Thursday 02 April 2026 01:11:45 +0000 (0:00:06.315) 0:00:30.983 ******** 2026-04-02 01:11:57.372017 | orchestrator | changed: [localhost] 2026-04-02 01:11:57.372023 | orchestrator | 2026-04-02 01:11:57.372029 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-02 01:11:57.372035 | orchestrator | Thursday 02 April 2026 01:11:49 +0000 (0:00:04.383) 0:00:35.367 ******** 2026-04-02 01:11:57.372041 | orchestrator | changed: [localhost] 2026-04-02 01:11:57.372047 | orchestrator | 2026-04-02 01:11:57.372064 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-02 01:11:57.372083 | orchestrator | Thursday 02 April 2026 01:11:53 +0000 (0:00:03.966) 0:00:39.333 ******** 2026-04-02 01:11:57.372091 | orchestrator | ok: [localhost] 2026-04-02 01:11:57.372098 | orchestrator | 2026-04-02 01:11:57.372105 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:11:57.372111 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-02 01:11:57.372119 | orchestrator | 2026-04-02 01:11:57.372126 | orchestrator | 2026-04-02 01:11:57.372132 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:11:57.372139 | orchestrator | Thursday 02 April 2026 01:11:57 +0000 (0:00:03.657) 0:00:42.991 ******** 2026-04-02 01:11:57.372146 | orchestrator | =============================================================================== 2026-04-02 01:11:57.372152 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.60s 2026-04-02 01:11:57.372181 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.96s 2026-04-02 01:11:57.372185 | 
orchestrator | Set public network to default ------------------------------------------- 6.32s 2026-04-02 01:11:57.372189 | orchestrator | Create public network --------------------------------------------------- 4.99s 2026-04-02 01:11:57.372193 | orchestrator | Create public subnet ---------------------------------------------------- 4.38s 2026-04-02 01:11:57.372197 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.97s 2026-04-02 01:11:57.372201 | orchestrator | Create manager role ----------------------------------------------------- 3.66s 2026-04-02 01:11:57.372205 | orchestrator | Gathering Facts --------------------------------------------------------- 2.02s 2026-04-02 01:11:59.280799 | orchestrator | 2026-04-02 01:11:59 | INFO  | It takes a moment until task c2e36a94-7d0a-4e2b-8793-3db9baecdaf8 (image-manager) has been started and output is visible here. 2026-04-02 01:12:42.234881 | orchestrator | 2026-04-02 01:12:02 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-02 01:12:42.234997 | orchestrator | 2026-04-02 01:12:02 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-02 01:12:42.235021 | orchestrator | 2026-04-02 01:12:02 | INFO  | Importing image Cirros 0.6.2 2026-04-02 01:12:42.235037 | orchestrator | 2026-04-02 01:12:02 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-02 01:12:42.235047 | orchestrator | 2026-04-02 01:12:04 | INFO  | Waiting for image to leave queued state... 2026-04-02 01:12:42.235062 | orchestrator | 2026-04-02 01:12:06 | INFO  | Waiting for import to complete... 
2026-04-02 01:12:42.235079 | orchestrator | 2026-04-02 01:12:17 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-02 01:12:42.235099 | orchestrator | 2026-04-02 01:12:17 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-02 01:12:42.235112 | orchestrator | 2026-04-02 01:12:17 | INFO  | Setting internal_version = 0.6.2 2026-04-02 01:12:42.235126 | orchestrator | 2026-04-02 01:12:17 | INFO  | Setting image_original_user = cirros 2026-04-02 01:12:42.235139 | orchestrator | 2026-04-02 01:12:17 | INFO  | Adding tag os:cirros 2026-04-02 01:12:42.235152 | orchestrator | 2026-04-02 01:12:17 | INFO  | Setting property architecture: x86_64 2026-04-02 01:12:42.235164 | orchestrator | 2026-04-02 01:12:18 | INFO  | Setting property hw_disk_bus: scsi 2026-04-02 01:12:42.235176 | orchestrator | 2026-04-02 01:12:18 | INFO  | Setting property hw_rng_model: virtio 2026-04-02 01:12:42.235189 | orchestrator | 2026-04-02 01:12:18 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-02 01:12:42.235202 | orchestrator | 2026-04-02 01:12:18 | INFO  | Setting property hw_watchdog_action: reset 2026-04-02 01:12:42.235215 | orchestrator | 2026-04-02 01:12:19 | INFO  | Setting property hypervisor_type: qemu 2026-04-02 01:12:42.235242 | orchestrator | 2026-04-02 01:12:19 | INFO  | Setting property os_distro: cirros 2026-04-02 01:12:42.235256 | orchestrator | 2026-04-02 01:12:19 | INFO  | Setting property os_purpose: minimal 2026-04-02 01:12:42.235270 | orchestrator | 2026-04-02 01:12:19 | INFO  | Setting property replace_frequency: never 2026-04-02 01:12:42.235283 | orchestrator | 2026-04-02 01:12:19 | INFO  | Setting property uuid_validity: none 2026-04-02 01:12:42.235298 | orchestrator | 2026-04-02 01:12:20 | INFO  | Setting property provided_until: none 2026-04-02 01:12:42.235311 | orchestrator | 2026-04-02 01:12:20 | INFO  | Setting property image_description: Cirros 2026-04-02 01:12:42.235326 | orchestrator | 2026-04-02 01:12:20 | INFO  | 
Setting property image_name: Cirros 2026-04-02 01:12:42.235369 | orchestrator | 2026-04-02 01:12:20 | INFO  | Setting property internal_version: 0.6.2 2026-04-02 01:12:42.235380 | orchestrator | 2026-04-02 01:12:20 | INFO  | Setting property image_original_user: cirros 2026-04-02 01:12:42.235389 | orchestrator | 2026-04-02 01:12:21 | INFO  | Setting property os_version: 0.6.2 2026-04-02 01:12:42.235400 | orchestrator | 2026-04-02 01:12:21 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-02 01:12:42.235412 | orchestrator | 2026-04-02 01:12:21 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-02 01:12:42.235421 | orchestrator | 2026-04-02 01:12:21 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-02 01:12:42.235431 | orchestrator | 2026-04-02 01:12:21 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-02 01:12:42.235444 | orchestrator | 2026-04-02 01:12:21 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-02 01:12:42.235454 | orchestrator | 2026-04-02 01:12:21 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-02 01:12:42.235463 | orchestrator | 2026-04-02 01:12:22 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-02 01:12:42.235473 | orchestrator | 2026-04-02 01:12:22 | INFO  | Importing image Cirros 0.6.3 2026-04-02 01:12:42.235482 | orchestrator | 2026-04-02 01:12:22 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-02 01:12:42.235492 | orchestrator | 2026-04-02 01:12:23 | INFO  | Waiting for image to leave queued state... 2026-04-02 01:12:42.235502 | orchestrator | 2026-04-02 01:12:25 | INFO  | Waiting for import to complete... 
2026-04-02 01:12:42.235529 | orchestrator | 2026-04-02 01:12:36 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-02 01:12:42.235538 | orchestrator | 2026-04-02 01:12:36 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-02 01:12:42.235546 | orchestrator | 2026-04-02 01:12:36 | INFO  | Setting internal_version = 0.6.3
2026-04-02 01:12:42.235554 | orchestrator | 2026-04-02 01:12:36 | INFO  | Setting image_original_user = cirros
2026-04-02 01:12:42.235562 | orchestrator | 2026-04-02 01:12:36 | INFO  | Adding tag os:cirros
2026-04-02 01:12:42.235569 | orchestrator | 2026-04-02 01:12:36 | INFO  | Setting property architecture: x86_64
2026-04-02 01:12:42.235577 | orchestrator | 2026-04-02 01:12:37 | INFO  | Setting property hw_disk_bus: scsi
2026-04-02 01:12:42.235585 | orchestrator | 2026-04-02 01:12:37 | INFO  | Setting property hw_rng_model: virtio
2026-04-02 01:12:42.235593 | orchestrator | 2026-04-02 01:12:37 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-02 01:12:42.235601 | orchestrator | 2026-04-02 01:12:37 | INFO  | Setting property hw_watchdog_action: reset
2026-04-02 01:12:42.235609 | orchestrator | 2026-04-02 01:12:37 | INFO  | Setting property hypervisor_type: qemu
2026-04-02 01:12:42.235617 | orchestrator | 2026-04-02 01:12:38 | INFO  | Setting property os_distro: cirros
2026-04-02 01:12:42.235625 | orchestrator | 2026-04-02 01:12:38 | INFO  | Setting property os_purpose: minimal
2026-04-02 01:12:42.235638 | orchestrator | 2026-04-02 01:12:38 | INFO  | Setting property replace_frequency: never
2026-04-02 01:12:42.235650 | orchestrator | 2026-04-02 01:12:38 | INFO  | Setting property uuid_validity: none
2026-04-02 01:12:42.235673 | orchestrator | 2026-04-02 01:12:39 | INFO  | Setting property provided_until: none
2026-04-02 01:12:42.235685 | orchestrator | 2026-04-02 01:12:39 | INFO  | Setting property image_description: Cirros
2026-04-02 01:12:42.235710 | orchestrator | 2026-04-02 01:12:39 | INFO  | Setting property image_name: Cirros
2026-04-02 01:12:42.235722 | orchestrator | 2026-04-02 01:12:40 | INFO  | Setting property internal_version: 0.6.3
2026-04-02 01:12:42.235735 | orchestrator | 2026-04-02 01:12:40 | INFO  | Setting property image_original_user: cirros
2026-04-02 01:12:42.235776 | orchestrator | 2026-04-02 01:12:40 | INFO  | Setting property os_version: 0.6.3
2026-04-02 01:12:42.235790 | orchestrator | 2026-04-02 01:12:40 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-02 01:12:42.235802 | orchestrator | 2026-04-02 01:12:41 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-02 01:12:42.235815 | orchestrator | 2026-04-02 01:12:41 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-02 01:12:42.235828 | orchestrator | 2026-04-02 01:12:41 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-02 01:12:42.235841 | orchestrator | 2026-04-02 01:12:41 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-02 01:12:42.466353 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-02 01:12:44.333554 | orchestrator | 2026-04-02 01:12:44 | INFO  | date: 2026-04-01
2026-04-02 01:12:44.333648 | orchestrator | 2026-04-02 01:12:44 | INFO  | image: octavia-amphora-haproxy-2024.2.20260401.qcow2
2026-04-02 01:12:44.333684 | orchestrator | 2026-04-02 01:12:44 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260401.qcow2
2026-04-02 01:12:44.333695 | orchestrator | 2026-04-02 01:12:44 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260401.qcow2.CHECKSUM
2026-04-02 01:12:44.514870 | orchestrator | 2026-04-02 01:12:44 | INFO  | checksum: 0f812ada1f9f7fafccd5ba5ef13a25cb260312d3fbf4c01e89f52737a8afc7ff
2026-04-02 01:12:44.620547 | orchestrator | 2026-04-02 01:12:44 | INFO  | It takes a moment until task aac87656-3b3c-4060-9fea-210d2d4096e7 (image-manager) has been started and output is visible here.
2026-04-02 01:13:47.511942 | orchestrator | 2026-04-02 01:12:46 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-01'
2026-04-02 01:13:47.512057 | orchestrator | 2026-04-02 01:12:46 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260401.qcow2: 200
2026-04-02 01:13:47.512071 | orchestrator | 2026-04-02 01:12:46 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-01
2026-04-02 01:13:47.512078 | orchestrator | 2026-04-02 01:12:46 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260401.qcow2
2026-04-02 01:13:47.512086 | orchestrator | 2026-04-02 01:12:48 | INFO  | Waiting for image to leave queued state...
2026-04-02 01:13:47.512093 | orchestrator | 2026-04-02 01:12:50 | INFO  | Waiting for import to complete...
2026-04-02 01:13:47.512100 | orchestrator | 2026-04-02 01:13:01 | INFO  | Waiting for import to complete...
2026-04-02 01:13:47.512106 | orchestrator | 2026-04-02 01:13:11 | INFO  | Waiting for import to complete...
2026-04-02 01:13:47.512113 | orchestrator | 2026-04-02 01:13:21 | INFO  | Waiting for import to complete...
2026-04-02 01:13:47.512123 | orchestrator | 2026-04-02 01:13:31 | INFO  | Waiting for import to complete...
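The repeated "Waiting for import to complete..." lines above come from a simple status-poll loop in the image manager. A minimal sketch of such a loop, with the polled command, interval, and retry cap all assumed for illustration (the real image manager polls Glance via the OpenStack SDK):

```shell
#!/usr/bin/env bash
# Sketch of a generic status-poll loop like the one behind the
# "Waiting for import to complete..." messages. The 10-second interval
# and 60-attempt cap are assumptions, not the image manager's actual values.
wait_for_status() {
    local want="$1"; shift
    local attempt=0
    # Re-run the given command until it prints the wanted status.
    until [ "$("$@")" = "$want" ]; do
        attempt=$((attempt + 1))
        if [ "$attempt" -ge 60 ]; then
            echo "timed out waiting for status '$want'" >&2
            return 1
        fi
        echo "Waiting for import to complete..."
        sleep 10
    done
}

# Against a real cloud the polled command would be something like
# "openstack image show <uuid> -f value -c status" (hypothetical usage);
# here a stub command stands in for it:
wait_for_status active echo active && echo "import finished"
```

The poll-with-cap shape matters here: the job above waited roughly a minute for the amphora image, but an unbounded loop would hang the whole pipeline if an import ever got stuck.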
2026-04-02 01:13:47.512130 | orchestrator | 2026-04-02 01:13:41 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-01' successfully completed, reloading images
2026-04-02 01:13:47.512169 | orchestrator | 2026-04-02 01:13:42 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-01'
2026-04-02 01:13:47.512176 | orchestrator | 2026-04-02 01:13:42 | INFO  | Setting internal_version = 2026-04-01
2026-04-02 01:13:47.512182 | orchestrator | 2026-04-02 01:13:42 | INFO  | Setting image_original_user = ubuntu
2026-04-02 01:13:47.512190 | orchestrator | 2026-04-02 01:13:42 | INFO  | Adding tag amphora
2026-04-02 01:13:47.512196 | orchestrator | 2026-04-02 01:13:42 | INFO  | Adding tag os:ubuntu
2026-04-02 01:13:47.512203 | orchestrator | 2026-04-02 01:13:42 | INFO  | Setting property architecture: x86_64
2026-04-02 01:13:47.512209 | orchestrator | 2026-04-02 01:13:42 | INFO  | Setting property hw_disk_bus: scsi
2026-04-02 01:13:47.512215 | orchestrator | 2026-04-02 01:13:43 | INFO  | Setting property hw_rng_model: virtio
2026-04-02 01:13:47.512221 | orchestrator | 2026-04-02 01:13:43 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-02 01:13:47.512227 | orchestrator | 2026-04-02 01:13:43 | INFO  | Setting property hw_watchdog_action: reset
2026-04-02 01:13:47.512233 | orchestrator | 2026-04-02 01:13:43 | INFO  | Setting property hypervisor_type: qemu
2026-04-02 01:13:47.512239 | orchestrator | 2026-04-02 01:13:44 | INFO  | Setting property os_distro: ubuntu
2026-04-02 01:13:47.512246 | orchestrator | 2026-04-02 01:13:44 | INFO  | Setting property replace_frequency: quarterly
2026-04-02 01:13:47.512252 | orchestrator | 2026-04-02 01:13:44 | INFO  | Setting property uuid_validity: last-1
2026-04-02 01:13:47.512258 | orchestrator | 2026-04-02 01:13:44 | INFO  | Setting property provided_until: none
2026-04-02 01:13:47.512265 | orchestrator | 2026-04-02 01:13:45 | INFO  | Setting property os_purpose: network
2026-04-02 01:13:47.512271 | orchestrator | 2026-04-02 01:13:45 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-02 01:13:47.512292 | orchestrator | 2026-04-02 01:13:45 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-02 01:13:47.512299 | orchestrator | 2026-04-02 01:13:45 | INFO  | Setting property internal_version: 2026-04-01
2026-04-02 01:13:47.512306 | orchestrator | 2026-04-02 01:13:46 | INFO  | Setting property image_original_user: ubuntu
2026-04-02 01:13:47.512313 | orchestrator | 2026-04-02 01:13:46 | INFO  | Setting property os_version: 2026-04-01
2026-04-02 01:13:47.512320 | orchestrator | 2026-04-02 01:13:46 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260401.qcow2
2026-04-02 01:13:47.512326 | orchestrator | 2026-04-02 01:13:46 | INFO  | Setting property image_build_date: 2026-04-01
2026-04-02 01:13:47.512332 | orchestrator | 2026-04-02 01:13:47 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-01'
2026-04-02 01:13:47.512339 | orchestrator | 2026-04-02 01:13:47 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-01'
2026-04-02 01:13:47.512345 | orchestrator | 2026-04-02 01:13:47 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-02 01:13:47.512366 | orchestrator | 2026-04-02 01:13:47 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-02 01:13:47.512375 | orchestrator | 2026-04-02 01:13:47 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-02 01:13:47.512381 | orchestrator | 2026-04-02 01:13:47 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-02 01:13:48.000013 | orchestrator | ok: Runtime: 0:03:02.468344
2026-04-02 01:13:48.022914 |
2026-04-02 01:13:48.023067 | TASK [Run checks]
2026-04-02 01:13:48.781163 | orchestrator | + set -e
2026-04-02 01:13:48.781325 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-02 01:13:48.781341 | orchestrator | ++ export INTERACTIVE=false
2026-04-02 01:13:48.781352 | orchestrator | ++ INTERACTIVE=false
2026-04-02 01:13:48.781360 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-02 01:13:48.781368 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-02 01:13:48.781376 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-02 01:13:48.782347 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-02 01:13:48.788715 | orchestrator |
2026-04-02 01:13:48.788820 | orchestrator | # CHECK
2026-04-02 01:13:48.788834 | orchestrator |
2026-04-02 01:13:48.788841 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-02 01:13:48.788854 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-02 01:13:48.788861 | orchestrator | + echo
2026-04-02 01:13:48.788868 | orchestrator | + echo '# CHECK'
2026-04-02 01:13:48.788875 | orchestrator | + echo
2026-04-02 01:13:48.788887 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-02 01:13:48.789666 | orchestrator | ++ semver latest 5.0.0
2026-04-02 01:13:48.848707 | orchestrator |
2026-04-02 01:13:48.848799 | orchestrator | ## Containers @ testbed-manager
2026-04-02 01:13:48.848809 | orchestrator |
2026-04-02 01:13:48.848819 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-02 01:13:48.848827 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-02 01:13:48.848834 | orchestrator | + echo
2026-04-02 01:13:48.848840 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-02 01:13:48.848848 | orchestrator | + echo
2026-04-02 01:13:48.848855 | orchestrator | + osism container testbed-manager ps
2026-04-02 01:13:49.897071 | orchestrator | 2026-04-02 01:13:49 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-02 01:13:50.256041 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-02 01:13:50.256209 | orchestrator | 0e6e54a9d3fb registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2026-04-02 01:13:50.256233 | orchestrator | effe4ecff23d registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager
2026-04-02 01:13:50.256245 | orchestrator | b2576e8471c6 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-04-02 01:13:50.256252 | orchestrator | 6786deec2dac registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-02 01:13:50.256291 | orchestrator | a17ce5aadb35 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2026-04-02 01:13:50.256300 | orchestrator | b7396279582e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient
2026-04-02 01:13:50.256308 | orchestrator | adbb1710b5d8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-02 01:13:50.256315 | orchestrator | c77d7bc5fddb registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox
2026-04-02 01:13:50.256348 | orchestrator | 32b6d867a1dd registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-02 01:13:50.256356 | orchestrator | dd2e66a3f765 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 28 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin
2026-04-02 01:13:50.256735 | orchestrator | db1ea2811186 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 28 minutes openstackclient
2026-04-02 01:13:50.256748 | orchestrator | 0ee004903596 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer
2026-04-02 01:13:50.256754 | orchestrator | 130fc051a2f4 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-02 01:13:50.256761 | orchestrator | e06827e43adb registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1
2026-04-02 01:13:50.256768 | orchestrator | 7119dd7a2026 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-ansible
2026-04-02 01:13:50.256775 | orchestrator | e23d3abf2620 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-kubernetes
2026-04-02 01:13:50.256787 | orchestrator | a22f81386c6f registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) ceph-ansible
2026-04-02 01:13:50.256795 | orchestrator | 6a7f70b6ab1b registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) kolla-ansible
2026-04-02 01:13:50.256801 | orchestrator | 8a51b27ab6f6 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 35 minutes (healthy) 8000/tcp manager-ara-server-1
2026-04-02 01:13:50.256808 | orchestrator | 43a63d04e16e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1
2026-04-02 01:13:50.256815 | orchestrator | a99f2fb2a04d registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1
2026-04-02 01:13:50.256821 | orchestrator | 8edf64112027 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-02 01:13:50.256828 | orchestrator | 206be4c46341 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1
2026-04-02 01:13:50.256842 | orchestrator | 32a99729aafc registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-beat-1
2026-04-02 01:13:50.256849 | orchestrator | a999168f291e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1
2026-04-02 01:13:50.256856 | orchestrator | ab51186a0ecd registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient
2026-04-02 01:13:50.256869 | orchestrator | a437174fdd74 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1
2026-04-02 01:13:50.256876 | orchestrator | 8b3746fce0a1 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-02 01:13:50.256883 | orchestrator | 68821dd99380 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-02 01:13:50.406336 | orchestrator |
2026-04-02 01:13:50.406429 | orchestrator | ## Images @ testbed-manager
2026-04-02 01:13:50.406440 | orchestrator |
2026-04-02 01:13:50.406447 | orchestrator | + echo
2026-04-02 01:13:50.406455 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-02 01:13:50.406462 | orchestrator | + echo
2026-04-02 01:13:50.406473 | orchestrator | + osism container testbed-manager images
2026-04-02 01:13:51.825374 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-02 01:13:51.825478 | orchestrator | registry.osism.tech/osism/osism-ansible latest 48c263a07afd 59 minutes ago 638MB
2026-04-02 01:13:51.825490 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 62d0568f2470 About an hour ago 636MB
2026-04-02 01:13:51.825496 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 3d5e93cb4c88 About an hour ago 1.24GB
2026-04-02 01:13:51.825501 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 4a8bbe1e6566 About an hour ago 585MB
2026-04-02 01:13:51.825526 | orchestrator | registry.osism.tech/osism/osism-frontend latest 8bc102416be8 About an hour ago 212MB
2026-04-02 01:13:51.825534 | orchestrator | registry.osism.tech/osism/osism latest 377b3c449eb8 About an hour ago 407MB
2026-04-02 01:13:51.825542 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest d3469ee55591 About an hour ago 357MB
2026-04-02 01:13:51.825550 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 963bec3176cb 21 hours ago 239MB
2026-04-02 01:13:51.825555 | orchestrator | registry.osism.tech/osism/cephclient reef 36991001dab1 21 hours ago 453MB
2026-04-02 01:13:51.825560 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ac581034c7ab 23 hours ago 590MB
2026-04-02 01:13:51.825565 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7aaeb79a3a47 23 hours ago 277MB
2026-04-02 01:13:51.825569 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ca79b4ef6a7a 23 hours ago 679MB
2026-04-02 01:13:51.825574 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 53c830c0e0f1 23 hours ago 319MB
2026-04-02 01:13:51.825579 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 f9e2a732d2e4 23 hours ago 317MB
2026-04-02 01:13:51.825602 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3a732237c7b8 23 hours ago 368MB
2026-04-02 01:13:51.825607 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 1c3aa3ea6015 23 hours ago 415MB
2026-04-02 01:13:51.825612 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 ecbea13ff569 23 hours ago 850MB
2026-04-02 01:13:51.825617 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-02 01:13:51.825621 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-04-02 01:13:51.825649 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-02 01:13:51.825655 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-04-02 01:13:51.825660 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-02 01:13:51.825884 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-02 01:13:51.825895 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-02 01:13:51.964293 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-02 01:13:51.964607 | orchestrator | ++ semver latest 5.0.0
2026-04-02 01:13:52.014737 | orchestrator |
2026-04-02 01:13:52.014812 | orchestrator | ## Containers @ testbed-node-0
2026-04-02 01:13:52.014819 | orchestrator |
2026-04-02 01:13:52.014824 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-02 01:13:52.014828 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-02 01:13:52.014832 | orchestrator | + echo
2026-04-02 01:13:52.014837 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-02 01:13:52.014842 | orchestrator | + echo
2026-04-02 01:13:52.014848 | orchestrator | + osism container testbed-node-0 ps
2026-04-02 01:13:53.502291 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-02 01:13:53.502369 | orchestrator | 9822612bbe54 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-02 01:13:53.502377 | orchestrator | 18482fb0093c registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-02 01:13:53.502382 | orchestrator | 15186bbf825a registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-02 01:13:53.502386 | orchestrator | a752c2cb7007 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-02 01:13:53.502390 | orchestrator | 764f11a752e3 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-02 01:13:53.502394 | orchestrator | 8b52b6a69799 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-04-02 01:13:53.502398 | orchestrator | f43b298816fe registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-02 01:13:53.502415 | orchestrator | a822e5f9e2fa registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-04-02 01:13:53.502419 | orchestrator | 21a83347a828 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-04-02 01:13:53.502436 | orchestrator | 1b3aa7c9066f registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-02 01:13:53.502440 | orchestrator | cdb7dd4c2cfd registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-04-02 01:13:53.502444 | orchestrator | 17e84aaf935d registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2026-04-02 01:13:53.502448 | orchestrator | 8a7ca1a90267 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-02 01:13:53.502452 | orchestrator | 4dbd6c5eda17 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-04-02 01:13:53.502456 | orchestrator | 5aa8ab2ce920 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-02 01:13:53.502459 | orchestrator | d3662c4ddab3 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-02 01:13:53.502463 | orchestrator | 536568f46959 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-02 01:13:53.502467 | orchestrator | 0959defac589 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-02 01:13:53.502471 | orchestrator | 7ecaedbc3b9f registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-02 01:13:53.502475 | orchestrator | f2c0eb6ebd95 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-04-02 01:13:53.502479 | orchestrator | 3a1c21ed139b registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-02 01:13:53.502494 | orchestrator | 42a98d16c653 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-02 01:13:53.502498 | orchestrator | 33ff03ef746b registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-04-02 01:13:53.502502 | orchestrator | 1522100e2911 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-02 01:13:53.502506 | orchestrator | c315ced7b4da registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-02 01:13:53.502514 | orchestrator | 708fcd4939ef registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-04-02 01:13:53.502518 | orchestrator | 4367bf2169a0 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-02 01:13:53.502522 | orchestrator | 10e271b6144a registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-04-02 01:13:53.502530 | orchestrator | d301f42afe22 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-04-02 01:13:53.502537 | orchestrator | b65514547990 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-04-02 01:13:53.502541 | orchestrator | 6ba7a08a9a03 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-04-02 01:13:53.502545 | orchestrator | 3221a01834f3 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-04-02 01:13:53.502549 | orchestrator | 3533e5c14eaf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-02 01:13:53.502553 | orchestrator | 3c46bd70d93d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-04-02 01:13:53.502557 | orchestrator | a2c66ffbc2df registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-04-02 01:13:53.502560 | orchestrator | 3a6ec2c33b36 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-04-02 01:13:53.502564 | orchestrator | 3338320d054e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-04-02 01:13:53.502568 | orchestrator | 9a04aedc9907 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-04-02 01:13:53.502572 | orchestrator | 1b6dc85fcb97 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb
2026-04-02 01:13:53.502575 | orchestrator | 5553e5794821 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-04-02 01:13:53.502579 | orchestrator | e07e750aec55 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-04-02 01:13:53.502583 | orchestrator | d90efb972991 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2026-04-02 01:13:53.502587 | orchestrator | 0d44af84755d registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-04-02 01:13:53.502591 | orchestrator | d3cc1040d047 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql
2026-04-02 01:13:53.502599 | orchestrator | 1e98843774f5 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-04-02 01:13:53.502603 | orchestrator | e75ab4e46eff registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd
2026-04-02 01:13:53.502607 | orchestrator | 069650620a99 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db
2026-04-02 01:13:53.502611 | orchestrator | f53a83c5adf3 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db
2026-04-02 01:13:53.502618 | orchestrator | 23e11e95b6bd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0
2026-04-02 01:13:53.502622 | orchestrator | 08a15217177b registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2026-04-02 01:13:53.502701 | orchestrator | d11d37b7884d registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-04-02 01:13:53.502706 | orchestrator | 502bc7484997 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2026-04-02 01:13:53.502710 | orchestrator | b1a4cbaefef4 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-04-02 01:13:53.502713 | orchestrator | 40076187940d registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-04-02 01:13:53.502721 | orchestrator | 2981fe2c1797 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-04-02 01:13:53.502725 | orchestrator | 3a7211b113db registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-04-02 01:13:53.502728 | orchestrator | bb1901d227df registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-02 01:13:53.502732 | orchestrator | fc99652f3c6b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-04-02 01:13:53.502736 | orchestrator | a0074664010f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-02 01:13:53.656429 | orchestrator |
2026-04-02 01:13:53.656512 | orchestrator | ## Images @ testbed-node-0
2026-04-02 01:13:53.656527 | orchestrator |
2026-04-02 01:13:53.656534 | orchestrator | + echo
2026-04-02 01:13:53.656541 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-02 01:13:53.656548 | orchestrator | + echo
2026-04-02 01:13:53.656554 | orchestrator | + osism container testbed-node-0 images
2026-04-02 01:13:55.121594 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-02 01:13:55.121719 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 585d5843ff29 21 hours ago 1.35GB
2026-04-02 01:13:55.121727 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ac581034c7ab 23 hours ago 590MB
2026-04-02 01:13:55.121731 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 961e8a9dee43 23 hours ago 1.04GB
2026-04-02 01:13:55.121735 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7aaeb79a3a47 23 hours ago 277MB
2026-04-02 01:13:55.121740 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 b9334b632ef5 23 hours ago 277MB
2026-04-02 01:13:55.121744 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ca79b4ef6a7a 23 hours ago 679MB
2026-04-02 01:13:55.121749 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 9d29bc358bce 23 hours ago 427MB
2026-04-02 01:13:55.121753 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5d3988842e87 23 hours ago 287MB
2026-04-02 01:13:55.121756 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 38b759f33d5f 23 hours ago 285MB
2026-04-02 01:13:55.121760 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 8dc7b4dccf2f 23 hours ago 1.54GB
2026-04-02 01:13:55.121780 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 66d8f1b30908 23 hours ago 1.57GB
2026-04-02 01:13:55.121784 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 5ee95f7a975a 23 hours ago 333MB
2026-04-02 01:13:55.121788 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 38601877dd1c 23 hours ago 303MB
2026-04-02 01:13:55.121792 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 d96bd9d5628c 23 hours ago 309MB
2026-04-02 01:13:55.121795 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 f9e2a732d2e4 23 hours ago 317MB
2026-04-02 01:13:55.121799 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3a732237c7b8 23 hours ago 368MB
2026-04-02 01:13:55.121815 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 61ff92f4fb36 23 hours ago 312MB
2026-04-02 01:13:55.121819 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ad6cab768ed9 23 hours ago 463MB
2026-04-02 01:13:55.121823 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7b0e1392275e 23 hours ago 284MB
2026-04-02 01:13:55.121827 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 4f9790ec9068 23 hours ago 284MB
2026-04-02 01:13:55.121831 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 1ba706d2d97c 23 hours ago 290MB
2026-04-02 01:13:55.121834 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 ac967fab9776 23 hours ago 290MB
2026-04-02 01:13:55.121838 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5a40b103a940 23 hours ago 1.16GB
2026-04-02 01:13:55.121842 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 19cd9a9098c4 23 hours ago 851MB
2026-04-02 01:13:55.121846 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 201886795090 23 hours ago 851MB
2026-04-02 01:13:55.121849 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 5a61a374623d 23 hours ago 851MB
2026-04-02 01:13:55.121853 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 30376f4e65e5 23 hours ago 851MB
2026-04-02 01:13:55.121857 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 363b898d0c31 23 hours ago 1.25GB
2026-04-02 01:13:55.121861 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 054c8a39234a 23 hours ago 1.14GB
2026-04-02 01:13:55.121864 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 da401fce9feb 23 hours ago 1e+03MB
2026-04-02 01:13:55.121868 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 776ad9f83ec2 23 hours ago 1e+03MB
2026-04-02 01:13:55.121872 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 d26c5e071c40 23 hours ago 995MB
2026-04-02 01:13:55.121876 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5a9960454dc2 23 hours ago 995MB
2026-04-02 01:13:55.121880 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d46e76c7c398 23 hours ago 995MB
2026-04-02 01:13:55.121883 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 fb79f0243d23 23 hours ago 994MB
2026-04-02 01:13:55.121887 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 03e6db13a036 23 hours ago 985MB
2026-04-02 01:13:55.121908 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 0a1690f0d22c 23 hours ago 985MB
2026-04-02 01:13:55.121912 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 d792cc17ae91 23 hours ago 984MB
2026-04-02 01:13:55.121916 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 6f0ca69c6cec 23 hours ago 985MB
2026-04-02 01:13:55.121920 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 30bd3ccd4829 23 hours ago 1.04GB
2026-04-02 01:13:55.121929 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1d2e1a4d6d6b 23 hours ago 1.04GB
2026-04-02 01:13:55.121933 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 0942a1c9661b 23 hours ago 1.04GB
2026-04-02 01:13:55.121936 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 94687871a9f3 23 hours ago 1.06GB
2026-04-02 01:13:55.121940 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 05babba07b90 23 hours ago 1.06GB
2026-04-02 01:13:55.121944 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 89d76cc962cd 23 hours ago 1.11GB
2026-04-02 01:13:55.121951 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 c7fabfb47f38 23 hours ago 987MB
2026-04-02 01:13:55.121955 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 73a19dc0869b 23 hours ago 987MB
2026-04-02 01:13:55.121958 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 73db21b36bc0 23 hours ago 987MB
2026-04-02 01:13:55.121962 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 0884c37eca6f 23 hours ago 1.05GB
2026-04-02 01:13:55.121967 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0baa1ee5f438 23 hours ago 1.08GB
2026-04-02 01:13:55.121972 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 5bdb6874fbba 23 hours ago 1.05GB
2026-04-02 01:13:55.121978 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 1d11cc7fabb5 23 hours ago 1.17GB
2026-04-02 01:13:55.121984 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 43cb7ddada7a 23 hours ago 1.22GB
2026-04-02 01:13:55.121990 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0bf82df7450b 23 hours ago 1.38GB
2026-04-02 01:13:55.121996 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 057fa8118027 23 hours ago 1.22GB
2026-04-02 01:13:55.122002 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2be830e7f95b 23 hours ago 1.22GB
2026-04-02 01:13:55.122008 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6b07ddd0842c 23 hours ago 1.42GB
2026-04-02 01:13:55.122057 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 1fd1c45d83f3 23 hours ago 1.73GB
2026-04-02 01:13:55.122063 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 f54c152f0800 23 hours ago 1.42GB
2026-04-02 01:13:55.122069 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 7c6d8ae2e64f 23 hours ago 1.42GB
2026-04-02 01:13:55.122075 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 1d53eb9293ad 23 hours ago 1GB
2026-04-02 01:13:55.122081 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9a3528aec53a 23 hours ago 1GB
2026-04-02 01:13:55.122087 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 dcc913a4e808 23 hours ago 1GB
2026-04-02 01:13:55.122093 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 6e981e4a1180 23 hours ago 1GB
2026-04-02 01:13:55.122098 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 f18ed1b361a6 23 hours ago 1.06GB
2026-04-02 01:13:55.269839 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-02 01:13:55.270838 | orchestrator | ++ semver latest 5.0.0
2026-04-02 01:13:55.323815 | orchestrator |
2026-04-02 01:13:55.323907 | orchestrator | ## Containers @ testbed-node-1
2026-04-02 01:13:55.323916 | orchestrator |
2026-04-02 01:13:55.323932 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-02 01:13:55.323940 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-02 01:13:55.323946 | orchestrator | + echo
2026-04-02 01:13:55.323953 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-02 01:13:55.323960 | orchestrator | + echo
2026-04-02 01:13:55.323967 | orchestrator | + osism container testbed-node-1 ps
2026-04-02 01:13:56.708735 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-02 01:13:56.708816 | orchestrator |
75bd522215b5 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-02 01:13:56.708825 | orchestrator | 2bfc4be898c8 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-02 01:13:56.708830 | orchestrator | baa216a2cbd9 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-02 01:13:56.708834 | orchestrator | 01afb53cbee8 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-02 01:13:56.708838 | orchestrator | f0d4b31e1c58 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-02 01:13:56.708864 | orchestrator | cabf53cc81e6 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-02 01:13:56.708875 | orchestrator | 0d5986056f49 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-02 01:13:56.708879 | orchestrator | e1471ac66748 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-02 01:13:56.708886 | orchestrator | b1baeaf55d35 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-02 01:13:56.708889 | orchestrator | 048d1ead7f46 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-02 01:13:56.708893 | orchestrator | 3343bbfa5275 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-02 01:13:56.708897 | orchestrator | 125319c51069 
registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_worker 2026-04-02 01:13:56.708902 | orchestrator | f8e024aa3b25 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-02 01:13:56.708906 | orchestrator | ce59bac8d2b7 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-02 01:13:56.708910 | orchestrator | acb635a2d362 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-02 01:13:56.708913 | orchestrator | 7f6d88bc03b0 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-02 01:13:56.708917 | orchestrator | 85ab9c8483e6 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-02 01:13:56.708921 | orchestrator | 42425bea5246 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-02 01:13:56.708925 | orchestrator | deae6d1849d0 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-02 01:13:56.708941 | orchestrator | aff6ebec4e84 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-02 01:13:56.708946 | orchestrator | 176e66293076 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-02 01:13:56.708960 | orchestrator | a17144cfae83 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-02 
01:13:56.708965 | orchestrator | 6acc791741ca registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-02 01:13:56.708969 | orchestrator | 4b6255ed0f78 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-04-02 01:13:56.708972 | orchestrator | 4857454340d6 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-02 01:13:56.708976 | orchestrator | cb0ea50da5a1 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-02 01:13:56.708980 | orchestrator | 21b20e112585 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-02 01:13:56.708988 | orchestrator | e75d05b369c3 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-02 01:13:56.708992 | orchestrator | 93baf2060194 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-02 01:13:56.708996 | orchestrator | a72741499f12 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-02 01:13:56.709000 | orchestrator | 80d814b77ea2 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-02 01:13:56.709004 | orchestrator | 252b94488a5e registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-02 01:13:56.709008 | orchestrator | b9ac947c8c98 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 
minutes ago Up 14 minutes prometheus_node_exporter 2026-04-02 01:13:56.709012 | orchestrator | 567cfe917c5d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-04-02 01:13:56.709016 | orchestrator | 1c3854642c4d registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-02 01:13:56.709019 | orchestrator | dcecfe542316 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-02 01:13:56.709023 | orchestrator | 072c1a5efa7c registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-04-02 01:13:56.709027 | orchestrator | 9a369693f7ab registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-02 01:13:56.709031 | orchestrator | ba3c167a2d16 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-02 01:13:56.709038 | orchestrator | cf11d88e8ccc registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-02 01:13:56.709042 | orchestrator | 63203f9f85ab registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-02 01:13:56.709046 | orchestrator | 42c2bba21455 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2026-04-02 01:13:56.709049 | orchestrator | 446e24e81b68 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-02 01:13:56.709053 | orchestrator | 6aca4ea3f86b registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-04-02 
01:13:56.709061 | orchestrator | 869a50d289b7 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-02 01:13:56.709065 | orchestrator | fef70b70815e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-02 01:13:56.709069 | orchestrator | b219aa2e8387 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-02 01:13:56.709073 | orchestrator | b0f5d7a8bcb9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2026-04-02 01:13:56.709077 | orchestrator | 0015362eaa5c registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-02 01:13:56.709080 | orchestrator | aea499737908 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-04-02 01:13:56.709084 | orchestrator | 64f3307d8d6a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2026-04-02 01:13:56.709088 | orchestrator | 268b3918f0c4 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-02 01:13:56.709094 | orchestrator | b57b040f6938 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-02 01:13:56.709098 | orchestrator | a4f8b177519c registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-02 01:13:56.709102 | orchestrator | 08738b6f5779 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-02 01:13:56.709106 | orchestrator | ef8401e8caad 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-02 01:13:56.709110 | orchestrator | afe993f66f79 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-02 01:13:56.709114 | orchestrator | 42a233a631f8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-04-02 01:13:56.709121 | orchestrator | 578c3689c26f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-02 01:13:56.853028 | orchestrator | 2026-04-02 01:13:56.853099 | orchestrator | ## Images @ testbed-node-1 2026-04-02 01:13:56.853106 | orchestrator | 2026-04-02 01:13:56.853111 | orchestrator | + echo 2026-04-02 01:13:56.853116 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-02 01:13:56.853121 | orchestrator | + echo 2026-04-02 01:13:56.853125 | orchestrator | + osism container testbed-node-1 images 2026-04-02 01:13:58.306253 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-02 01:13:58.306325 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 585d5843ff29 21 hours ago 1.35GB 2026-04-02 01:13:58.306331 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ac581034c7ab 23 hours ago 590MB 2026-04-02 01:13:58.306335 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 961e8a9dee43 23 hours ago 1.04GB 2026-04-02 01:13:58.306340 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7aaeb79a3a47 23 hours ago 277MB 2026-04-02 01:13:58.306344 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 b9334b632ef5 23 hours ago 277MB 2026-04-02 01:13:58.306348 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ca79b4ef6a7a 23 hours ago 679MB 2026-04-02 01:13:58.306352 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 9d29bc358bce 23 hours ago 427MB 2026-04-02 01:13:58.306355 | orchestrator | 
registry.osism.tech/kolla/keepalived 2024.2 5d3988842e87 23 hours ago 287MB 2026-04-02 01:13:58.306359 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 38b759f33d5f 23 hours ago 285MB 2026-04-02 01:13:58.306363 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 8dc7b4dccf2f 23 hours ago 1.54GB 2026-04-02 01:13:58.306366 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 66d8f1b30908 23 hours ago 1.57GB 2026-04-02 01:13:58.306370 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 5ee95f7a975a 23 hours ago 333MB 2026-04-02 01:13:58.306374 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 38601877dd1c 23 hours ago 303MB 2026-04-02 01:13:58.306378 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 d96bd9d5628c 23 hours ago 309MB 2026-04-02 01:13:58.306382 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 f9e2a732d2e4 23 hours ago 317MB 2026-04-02 01:13:58.306386 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3a732237c7b8 23 hours ago 368MB 2026-04-02 01:13:58.306389 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 61ff92f4fb36 23 hours ago 312MB 2026-04-02 01:13:58.306393 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ad6cab768ed9 23 hours ago 463MB 2026-04-02 01:13:58.306397 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 4f9790ec9068 23 hours ago 284MB 2026-04-02 01:13:58.306401 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7b0e1392275e 23 hours ago 284MB 2026-04-02 01:13:58.306404 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 1ba706d2d97c 23 hours ago 290MB 2026-04-02 01:13:58.306408 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 ac967fab9776 23 hours ago 290MB 2026-04-02 01:13:58.306412 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5a40b103a940 23 hours ago 1.16GB 2026-04-02 
01:13:58.306415 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 19cd9a9098c4 23 hours ago 851MB 2026-04-02 01:13:58.306419 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 201886795090 23 hours ago 851MB 2026-04-02 01:13:58.306439 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 30376f4e65e5 23 hours ago 851MB 2026-04-02 01:13:58.306443 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 5a61a374623d 23 hours ago 851MB 2026-04-02 01:13:58.306446 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 363b898d0c31 23 hours ago 1.25GB 2026-04-02 01:13:58.306450 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 054c8a39234a 23 hours ago 1.14GB 2026-04-02 01:13:58.306454 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 da401fce9feb 23 hours ago 1e+03MB 2026-04-02 01:13:58.306458 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 776ad9f83ec2 23 hours ago 1e+03MB 2026-04-02 01:13:58.306462 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 d26c5e071c40 23 hours ago 995MB 2026-04-02 01:13:58.306466 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5a9960454dc2 23 hours ago 995MB 2026-04-02 01:13:58.306481 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d46e76c7c398 23 hours ago 995MB 2026-04-02 01:13:58.306485 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 fb79f0243d23 23 hours ago 994MB 2026-04-02 01:13:58.306488 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 30bd3ccd4829 23 hours ago 1.04GB 2026-04-02 01:13:58.306502 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1d2e1a4d6d6b 23 hours ago 1.04GB 2026-04-02 01:13:58.306506 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 0942a1c9661b 23 hours ago 1.04GB 2026-04-02 01:13:58.306509 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 94687871a9f3 
23 hours ago 1.06GB 2026-04-02 01:13:58.306513 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 05babba07b90 23 hours ago 1.06GB 2026-04-02 01:13:58.306517 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 89d76cc962cd 23 hours ago 1.11GB 2026-04-02 01:13:58.306520 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 73db21b36bc0 23 hours ago 987MB 2026-04-02 01:13:58.306524 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 0884c37eca6f 23 hours ago 1.05GB 2026-04-02 01:13:58.306528 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0baa1ee5f438 23 hours ago 1.08GB 2026-04-02 01:13:58.306531 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 5bdb6874fbba 23 hours ago 1.05GB 2026-04-02 01:13:58.306535 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 1d11cc7fabb5 23 hours ago 1.17GB 2026-04-02 01:13:58.306539 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 43cb7ddada7a 23 hours ago 1.22GB 2026-04-02 01:13:58.306542 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0bf82df7450b 23 hours ago 1.38GB 2026-04-02 01:13:58.306546 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 057fa8118027 23 hours ago 1.22GB 2026-04-02 01:13:58.306550 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2be830e7f95b 23 hours ago 1.22GB 2026-04-02 01:13:58.306555 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6b07ddd0842c 23 hours ago 1.42GB 2026-04-02 01:13:58.306562 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 1fd1c45d83f3 23 hours ago 1.73GB 2026-04-02 01:13:58.306568 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 f54c152f0800 23 hours ago 1.42GB 2026-04-02 01:13:58.306573 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 7c6d8ae2e64f 23 hours ago 1.42GB 2026-04-02 01:13:58.306579 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 1d53eb9293ad 23 hours ago 1GB 
2026-04-02 01:13:58.306591 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9a3528aec53a 23 hours ago 1GB 2026-04-02 01:13:58.306597 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 dcc913a4e808 23 hours ago 1GB 2026-04-02 01:13:58.451611 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-02 01:13:58.452854 | orchestrator | ++ semver latest 5.0.0 2026-04-02 01:13:58.507041 | orchestrator | 2026-04-02 01:13:58.507118 | orchestrator | ## Containers @ testbed-node-2 2026-04-02 01:13:58.507126 | orchestrator | 2026-04-02 01:13:58.507132 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-02 01:13:58.507141 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-02 01:13:58.507150 | orchestrator | + echo 2026-04-02 01:13:58.507159 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-02 01:13:58.507169 | orchestrator | + echo 2026-04-02 01:13:58.507177 | orchestrator | + osism container testbed-node-2 ps 2026-04-02 01:13:59.964487 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-02 01:13:59.964573 | orchestrator | 72988fa2b82c registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-02 01:13:59.964581 | orchestrator | 7858a8df2649 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-02 01:13:59.964589 | orchestrator | b07e647377bd registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-02 01:13:59.965362 | orchestrator | c5af41105a4b registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-02 01:13:59.965400 | orchestrator | 46e9bce5f012 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 
minutes (healthy) octavia_api 2026-04-02 01:13:59.965411 | orchestrator | 5af74168e060 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-02 01:13:59.965418 | orchestrator | 45f74ffaf9c6 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-02 01:13:59.965425 | orchestrator | d9276412fcf9 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-02 01:13:59.965431 | orchestrator | 002cb3a02119 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-02 01:13:59.965438 | orchestrator | 48f2a664f9c7 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-02 01:13:59.965445 | orchestrator | 94a177a422bc registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-02 01:13:59.965451 | orchestrator | a0f75e109861 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-02 01:13:59.965458 | orchestrator | 2903e0a9d5d0 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-02 01:13:59.965465 | orchestrator | 58eedf399a6a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-02 01:13:59.965492 | orchestrator | bb44ed17ccf7 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-02 01:13:59.965515 | orchestrator | acf705ebddd4 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) 
designate_central 2026-04-02 01:13:59.965519 | orchestrator | 65427e94f0ed registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-02 01:13:59.965523 | orchestrator | 1c0e0cc45a4a registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-02 01:13:59.965527 | orchestrator | 54479984008b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-02 01:13:59.965530 | orchestrator | 85f6755fba95 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-02 01:13:59.965534 | orchestrator | 3184c8ba63b2 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-02 01:13:59.965556 | orchestrator | 989f82626a14 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-02 01:13:59.965560 | orchestrator | 79765c277807 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-02 01:13:59.965564 | orchestrator | 931606e5741c registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-04-02 01:13:59.965568 | orchestrator | fa48cbd94dbc registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-02 01:13:59.965572 | orchestrator | bf09415666cd registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-02 01:13:59.965575 | orchestrator | 7749c9d5e6e2 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 
13 minutes (healthy) cinder_scheduler 2026-04-02 01:13:59.965579 | orchestrator | 5c2498c788cf registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-02 01:13:59.965583 | orchestrator | 178231ea19c0 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-02 01:13:59.965588 | orchestrator | 05d810474066 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-02 01:13:59.965592 | orchestrator | 029a9ff9bf2c registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-02 01:13:59.965596 | orchestrator | 09f7eca62e34 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-02 01:13:59.965599 | orchestrator | 10ab1e5b056e registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-02 01:13:59.965603 | orchestrator | dbc02fb89792 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-04-02 01:13:59.965612 | orchestrator | 727e2d65a79c registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 16 minutes (healthy) keystone 2026-04-02 01:13:59.965665 | orchestrator | ac5aed685b9c registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-02 01:13:59.965669 | orchestrator | c0e2f08e12ea registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-04-02 01:13:59.965673 | orchestrator | fc98596550d8 registry.osism.tech/kolla/keystone-ssh:2024.2 
"dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-02 01:13:59.965677 | orchestrator | 6a95c253ded5 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-02 01:13:59.965681 | orchestrator | 778ba8a07aca registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-04-02 01:13:59.965684 | orchestrator | e0ba5b657333 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-02 01:13:59.965688 | orchestrator | b35599218326 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2026-04-02 01:13:59.965692 | orchestrator | 6707db00fad1 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-02 01:13:59.965696 | orchestrator | 90be3c615ba3 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-04-02 01:13:59.965704 | orchestrator | eafee85b5756 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-02 01:13:59.965708 | orchestrator | 75933e81ab29 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-02 01:13:59.965712 | orchestrator | e6dddf6ed318 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-02 01:13:59.965716 | orchestrator | 4b346fefb119 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db 2026-04-02 01:13:59.965720 | orchestrator | 2f917a53e4d7 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-04-02 01:13:59.965727 | 
orchestrator | 770adf6bf600 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-02 01:13:59.965731 | orchestrator | 0b9419eb739e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-2 2026-04-02 01:13:59.965735 | orchestrator | ea37fc43f740 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-02 01:13:59.965739 | orchestrator | 1c6c77e60af9 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-02 01:13:59.965743 | orchestrator | a510d836d767 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-02 01:13:59.965750 | orchestrator | 83342f451802 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-02 01:13:59.965754 | orchestrator | bae42b659ad0 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-02 01:13:59.965758 | orchestrator | 4391b3233d83 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-02 01:13:59.965762 | orchestrator | ddb57c87f7d9 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-04-02 01:13:59.965766 | orchestrator | af158dcd7666 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-02 01:14:00.112938 | orchestrator | 2026-04-02 01:14:00.113024 | orchestrator | ## Images @ testbed-node-2 2026-04-02 01:14:00.113033 | orchestrator | 2026-04-02 01:14:00.113041 | orchestrator | + echo 2026-04-02 01:14:00.113048 | orchestrator | + echo '## Images @ testbed-node-2' 
2026-04-02 01:14:00.113055 | orchestrator | + echo
2026-04-02 01:14:00.113070 | orchestrator | + osism container testbed-node-2 images
2026-04-02 01:14:01.547780 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-02 01:14:01.547862 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 585d5843ff29 21 hours ago 1.35GB
2026-04-02 01:14:01.547872 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ac581034c7ab 23 hours ago 590MB
2026-04-02 01:14:01.547888 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 961e8a9dee43 23 hours ago 1.04GB
2026-04-02 01:14:01.547895 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7aaeb79a3a47 23 hours ago 277MB
2026-04-02 01:14:01.547901 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 b9334b632ef5 23 hours ago 277MB
2026-04-02 01:14:01.547907 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ca79b4ef6a7a 23 hours ago 679MB
2026-04-02 01:14:01.547914 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 9d29bc358bce 23 hours ago 427MB
2026-04-02 01:14:01.547920 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5d3988842e87 23 hours ago 287MB
2026-04-02 01:14:01.547926 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 38b759f33d5f 23 hours ago 285MB
2026-04-02 01:14:01.547932 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 8dc7b4dccf2f 23 hours ago 1.54GB
2026-04-02 01:14:01.547939 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 66d8f1b30908 23 hours ago 1.57GB
2026-04-02 01:14:01.547945 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 5ee95f7a975a 23 hours ago 333MB
2026-04-02 01:14:01.547951 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 38601877dd1c 23 hours ago 303MB
2026-04-02 01:14:01.547957 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 d96bd9d5628c 23 hours ago 309MB
2026-04-02 01:14:01.547964 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 f9e2a732d2e4 23 hours ago 317MB
2026-04-02 01:14:01.547970 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3a732237c7b8 23 hours ago 368MB
2026-04-02 01:14:01.547976 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 61ff92f4fb36 23 hours ago 312MB
2026-04-02 01:14:01.547982 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ad6cab768ed9 23 hours ago 463MB
2026-04-02 01:14:01.547988 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7b0e1392275e 23 hours ago 284MB
2026-04-02 01:14:01.548010 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 4f9790ec9068 23 hours ago 284MB
2026-04-02 01:14:01.548018 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 1ba706d2d97c 23 hours ago 290MB
2026-04-02 01:14:01.548030 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 ac967fab9776 23 hours ago 290MB
2026-04-02 01:14:01.548047 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 5a40b103a940 23 hours ago 1.16GB
2026-04-02 01:14:01.548058 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 19cd9a9098c4 23 hours ago 851MB
2026-04-02 01:14:01.548067 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 201886795090 23 hours ago 851MB
2026-04-02 01:14:01.548077 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 5a61a374623d 23 hours ago 851MB
2026-04-02 01:14:01.548087 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 30376f4e65e5 23 hours ago 851MB
2026-04-02 01:14:01.548096 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 363b898d0c31 23 hours ago 1.25GB
2026-04-02 01:14:01.548106 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 054c8a39234a 23 hours ago 1.14GB
2026-04-02 01:14:01.548116 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 da401fce9feb 23 hours ago 1e+03MB
2026-04-02 01:14:01.548125 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 776ad9f83ec2 23 hours ago 1e+03MB
2026-04-02 01:14:01.548135 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 d26c5e071c40 23 hours ago 995MB
2026-04-02 01:14:01.548145 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5a9960454dc2 23 hours ago 995MB
2026-04-02 01:14:01.548156 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 d46e76c7c398 23 hours ago 995MB
2026-04-02 01:14:01.548167 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 fb79f0243d23 23 hours ago 994MB
2026-04-02 01:14:01.548178 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 30bd3ccd4829 23 hours ago 1.04GB
2026-04-02 01:14:01.548222 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 1d2e1a4d6d6b 23 hours ago 1.04GB
2026-04-02 01:14:01.548235 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 0942a1c9661b 23 hours ago 1.04GB
2026-04-02 01:14:01.548246 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 94687871a9f3 23 hours ago 1.06GB
2026-04-02 01:14:01.548257 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 05babba07b90 23 hours ago 1.06GB
2026-04-02 01:14:01.548268 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 89d76cc962cd 23 hours ago 1.11GB
2026-04-02 01:14:01.548279 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 73db21b36bc0 23 hours ago 987MB
2026-04-02 01:14:01.548289 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 0884c37eca6f 23 hours ago 1.05GB
2026-04-02 01:14:01.548297 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0baa1ee5f438 23 hours ago 1.08GB
2026-04-02 01:14:01.548304 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 5bdb6874fbba 23 hours ago 1.05GB
2026-04-02 01:14:01.548310 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 1d11cc7fabb5 23 hours ago 1.17GB
2026-04-02 01:14:01.548316 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 43cb7ddada7a 23 hours ago 1.22GB
2026-04-02 01:14:01.548322 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 0bf82df7450b 23 hours ago 1.38GB
2026-04-02 01:14:01.548335 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 057fa8118027 23 hours ago 1.22GB
2026-04-02 01:14:01.548349 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2be830e7f95b 23 hours ago 1.22GB
2026-04-02 01:14:01.548356 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6b07ddd0842c 23 hours ago 1.42GB
2026-04-02 01:14:01.548364 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 1fd1c45d83f3 23 hours ago 1.73GB
2026-04-02 01:14:01.548371 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 f54c152f0800 23 hours ago 1.42GB
2026-04-02 01:14:01.548379 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 7c6d8ae2e64f 23 hours ago 1.42GB
2026-04-02 01:14:01.548386 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 1d53eb9293ad 23 hours ago 1GB
2026-04-02 01:14:01.548393 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9a3528aec53a 23 hours ago 1GB
2026-04-02 01:14:01.548401 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 dcc913a4e808 23 hours ago 1GB
2026-04-02 01:14:01.694706 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-02 01:14:01.702163 | orchestrator | + set -e
2026-04-02 01:14:01.702224 | orchestrator | + source /opt/manager-vars.sh
2026-04-02 01:14:01.703077 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-02 01:14:01.703112 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-02 01:14:01.703126 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-02 01:14:01.703137 | orchestrator | ++ CEPH_VERSION=reef
2026-04-02 01:14:01.703148 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-02 01:14:01.703160 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-02 01:14:01.703307 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-02 01:14:01.703324 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-02 01:14:01.703330 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-02 01:14:01.703337 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-02 01:14:01.703343 | orchestrator | ++ export ARA=false
2026-04-02 01:14:01.703349 | orchestrator | ++ ARA=false
2026-04-02 01:14:01.703361 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-02 01:14:01.703371 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-02 01:14:01.703382 | orchestrator | ++ export TEMPEST=true
2026-04-02 01:14:01.703392 | orchestrator | ++ TEMPEST=true
2026-04-02 01:14:01.703403 | orchestrator | ++ export IS_ZUUL=true
2026-04-02 01:14:01.703413 | orchestrator | ++ IS_ZUUL=true
2026-04-02 01:14:01.703424 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251
2026-04-02 01:14:01.703434 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251
2026-04-02 01:14:01.703444 | orchestrator | ++ export EXTERNAL_API=false
2026-04-02 01:14:01.703453 | orchestrator | ++ EXTERNAL_API=false
2026-04-02 01:14:01.703463 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-02 01:14:01.703474 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-02 01:14:01.703485 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-02 01:14:01.703496 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-02 01:14:01.703506 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-02 01:14:01.703516 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-02 01:14:01.703527 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-02 01:14:01.703538 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-02 01:14:01.710077 | orchestrator | + set -e
2026-04-02 01:14:01.710154 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-02 01:14:01.710170 | orchestrator | ++ export INTERACTIVE=false
2026-04-02 01:14:01.710182 | orchestrator | ++ INTERACTIVE=false
2026-04-02 01:14:01.710194 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-02 01:14:01.710204 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-02 01:14:01.710213 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-02 01:14:01.711281 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-02 01:14:01.716228 | orchestrator |
2026-04-02 01:14:01.716303 | orchestrator | # Ceph status
2026-04-02 01:14:01.716313 | orchestrator |
2026-04-02 01:14:01.716320 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-02 01:14:01.716329 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-02 01:14:01.716337 | orchestrator | + echo
2026-04-02 01:14:01.716345 | orchestrator | + echo '# Ceph status'
2026-04-02 01:14:01.716352 | orchestrator | + echo
2026-04-02 01:14:01.716360 | orchestrator | + ceph -s
2026-04-02 01:14:02.256099 | orchestrator |   cluster:
2026-04-02 01:14:02.256205 | orchestrator |     id: 11111111-1111-1111-1111-111111111111
2026-04-02 01:14:02.256221 | orchestrator |     health: HEALTH_OK
2026-04-02 01:14:02.256233 | orchestrator |
2026-04-02 01:14:02.256244 | orchestrator |   services:
2026-04-02 01:14:02.256254 | orchestrator |     mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m)
2026-04-02 01:14:02.256264 | orchestrator |     mgr: testbed-node-0(active, since 15m), standbys: testbed-node-1, testbed-node-2
2026-04-02 01:14:02.256275 | orchestrator |     mds: 1/1 daemons up, 2 standby
2026-04-02 01:14:02.256285 | orchestrator |     osd: 6 osds: 6 up (since 22m), 6 in (since 23m)
2026-04-02 01:14:02.256295 | orchestrator |     rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-02 01:14:02.256305 | orchestrator |
2026-04-02 01:14:02.256315 | orchestrator |   data:
2026-04-02 01:14:02.256325 | orchestrator |     volumes: 1/1 healthy
2026-04-02 01:14:02.256335 | orchestrator |     pools: 14 pools, 401 pgs
2026-04-02 01:14:02.256344 | orchestrator |     objects: 555 objects, 2.2 GiB
2026-04-02 01:14:02.256354 | orchestrator |     usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-04-02 01:14:02.256364 | orchestrator |     pgs: 401 active+clean
2026-04-02 01:14:02.256374 | orchestrator |
2026-04-02 01:14:02.256383 | orchestrator |   io:
2026-04-02 01:14:02.256393 | orchestrator |     client: 38 KiB/s rd, 0 B/s wr, 38 op/s rd, 25 op/s wr
2026-04-02 01:14:02.256403 | orchestrator |
2026-04-02 01:14:02.306212 | orchestrator |
2026-04-02 01:14:02.306281 | orchestrator | # Ceph versions
2026-04-02 01:14:02.306294 | orchestrator |
2026-04-02 01:14:02.306303 | orchestrator | + echo
2026-04-02 01:14:02.306313 | orchestrator | + echo '# Ceph versions'
2026-04-02 01:14:02.306322 | orchestrator | + echo
2026-04-02 01:14:02.306331 | orchestrator | + ceph versions
2026-04-02 01:14:02.875239 | orchestrator | {
2026-04-02 01:14:02.875331 | orchestrator |     "mon": {
2026-04-02 01:14:02.875341 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-02 01:14:02.875348 | orchestrator |     },
2026-04-02 01:14:02.875355 | orchestrator |     "mgr": {
2026-04-02 01:14:02.875381 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-02 01:14:02.875388 | orchestrator |     },
2026-04-02 01:14:02.875395 | orchestrator |     "osd": {
2026-04-02 01:14:02.875401 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6
2026-04-02 01:14:02.875408 | orchestrator |     },
2026-04-02 01:14:02.875414 | orchestrator |     "mds": {
2026-04-02 01:14:02.875421 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-02 01:14:02.875425 | orchestrator |     },
2026-04-02 01:14:02.875429 | orchestrator |     "rgw": {
2026-04-02 01:14:02.875433 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-02 01:14:02.875436 | orchestrator |     },
2026-04-02 01:14:02.875440 | orchestrator |     "overall": {
2026-04-02 01:14:02.875445 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18
2026-04-02 01:14:02.875448 | orchestrator |     }
2026-04-02 01:14:02.875452 | orchestrator | }
2026-04-02 01:14:02.919726 | orchestrator |
2026-04-02 01:14:02.919815 | orchestrator | # Ceph OSD tree
2026-04-02 01:14:02.919825 | orchestrator |
2026-04-02 01:14:02.919832 | orchestrator | + echo
2026-04-02 01:14:02.919838 | orchestrator | + echo '# Ceph OSD tree'
2026-04-02 01:14:02.919845 | orchestrator | + echo
2026-04-02 01:14:02.919852 | orchestrator | + ceph osd df tree
2026-04-02 01:14:03.442333 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-04-02 01:14:03.442436 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 417 MiB 113 GiB 5.91 1.00 - root default
2026-04-02 01:14:03.442447 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3
2026-04-02 01:14:03.442454 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.47 1.26 201 up osd.0
2026-04-02 01:14:03.442460 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 888 MiB 819 MiB 1 KiB 70 MiB 19 GiB 4.34 0.74 189 up osd.5
2026-04-02 01:14:03.442467 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4
2026-04-02 01:14:03.442474 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.64 0.95 190 up osd.1
2026-04-02 01:14:03.442507 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.18 1.05 202 up osd.4
2026-04-02 01:14:03.442513 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5
2026-04-02 01:14:03.442519 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.20 1.22 188 up osd.2
2026-04-02 01:14:03.442525 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 944 MiB 875 MiB 1 KiB 70 MiB 19 GiB 4.62 0.78 200 up osd.3
2026-04-02 01:14:03.442532 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 417 MiB 113 GiB 5.91
2026-04-02 01:14:03.442538 | orchestrator | MIN/MAX VAR: 0.74/1.26 STDDEV: 1.18
2026-04-02 01:14:03.484738 | orchestrator |
2026-04-02 01:14:03.484814 | orchestrator | # Ceph monitor status
2026-04-02 01:14:03.484821 | orchestrator |
2026-04-02 01:14:03.484826 | orchestrator | + echo
2026-04-02 01:14:03.484830 | orchestrator | + echo '# Ceph monitor status'
2026-04-02 01:14:03.484835 | orchestrator | + echo
2026-04-02 01:14:03.484839 | orchestrator | + ceph mon stat
2026-04-02 01:14:04.060475 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-04-02 01:14:04.103030 | orchestrator |
2026-04-02 01:14:04.103132 | orchestrator | # Ceph quorum status
2026-04-02 01:14:04.103143 | orchestrator |
2026-04-02 01:14:04.103150 | orchestrator | + echo
2026-04-02 01:14:04.103157 | orchestrator | + echo '# Ceph quorum status'
2026-04-02 01:14:04.103163 | orchestrator | + echo
2026-04-02 01:14:04.103207 | orchestrator | + ceph quorum_status
2026-04-02 01:14:04.104533 | orchestrator | + jq
2026-04-02 01:14:04.715126 | orchestrator | {
2026-04-02 01:14:04.715222 | orchestrator |   "election_epoch": 8,
2026-04-02 01:14:04.715233 | orchestrator |   "quorum": [
2026-04-02 01:14:04.715241 | orchestrator |     0,
2026-04-02 01:14:04.715247 | orchestrator |     1,
2026-04-02 01:14:04.715254 | orchestrator |     2
2026-04-02 01:14:04.715261 | orchestrator |   ],
2026-04-02 01:14:04.715268 | orchestrator |   "quorum_names": [
2026-04-02 01:14:04.715274 | orchestrator |     "testbed-node-0",
2026-04-02 01:14:04.715280 | orchestrator |     "testbed-node-1",
2026-04-02 01:14:04.715285 | orchestrator |     "testbed-node-2"
2026-04-02 01:14:04.715291 | orchestrator |   ],
2026-04-02 01:14:04.715297 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2026-04-02 01:14:04.715304 | orchestrator |   "quorum_age": 1537,
2026-04-02 01:14:04.715310 | orchestrator |   "features": {
2026-04-02 01:14:04.715318 | orchestrator |     "quorum_con": "4540138322906710015",
2026-04-02 01:14:04.715325 | orchestrator |     "quorum_mon": [
2026-04-02 01:14:04.715332 | orchestrator |       "kraken",
2026-04-02 01:14:04.715338 | orchestrator |       "luminous",
2026-04-02 01:14:04.715345 | orchestrator |       "mimic",
2026-04-02 01:14:04.715352 | orchestrator |       "osdmap-prune",
2026-04-02 01:14:04.715359 | orchestrator |       "nautilus",
2026-04-02 01:14:04.715366 | orchestrator |       "octopus",
2026-04-02 01:14:04.715373 | orchestrator |       "pacific",
2026-04-02 01:14:04.715380 | orchestrator |       "elector-pinging",
2026-04-02 01:14:04.715386 | orchestrator |       "quincy",
2026-04-02 01:14:04.715393 | orchestrator |       "reef"
2026-04-02 01:14:04.715400 | orchestrator |     ]
2026-04-02 01:14:04.715406 | orchestrator |   },
2026-04-02 01:14:04.715413 | orchestrator |   "monmap": {
2026-04-02 01:14:04.715420 | orchestrator |     "epoch": 1,
2026-04-02 01:14:04.715426 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2026-04-02 01:14:04.715433 | orchestrator |     "modified": "2026-04-02T00:48:07.353289Z",
2026-04-02 01:14:04.715440 | orchestrator |     "created": "2026-04-02T00:48:07.353289Z",
2026-04-02 01:14:04.715446 | orchestrator |     "min_mon_release": 18,
2026-04-02 01:14:04.715453 | orchestrator |     "min_mon_release_name": "reef",
2026-04-02 01:14:04.715460 | orchestrator |     "election_strategy": 1,
2026-04-02 01:14:04.715467 | orchestrator |     "disallowed_leaders": "",
2026-04-02 01:14:04.715474 | orchestrator |     "stretch_mode": false,
2026-04-02 01:14:04.715481 | orchestrator |     "tiebreaker_mon": "",
2026-04-02 01:14:04.715488 | orchestrator |     "removed_ranks": "",
2026-04-02 01:14:04.715495 | orchestrator |     "features": {
2026-04-02 01:14:04.715502 | orchestrator |       "persistent": [
2026-04-02 01:14:04.715508 | orchestrator |         "kraken",
2026-04-02 01:14:04.715542 | orchestrator |         "luminous",
2026-04-02 01:14:04.715549 | orchestrator |         "mimic",
2026-04-02 01:14:04.715555 | orchestrator |         "osdmap-prune",
2026-04-02 01:14:04.715562 | orchestrator |         "nautilus",
2026-04-02 01:14:04.715569 | orchestrator |         "octopus",
2026-04-02 01:14:04.715575 | orchestrator |         "pacific",
2026-04-02 01:14:04.715582 | orchestrator |         "elector-pinging",
2026-04-02 01:14:04.715588 | orchestrator |         "quincy",
2026-04-02 01:14:04.715595 | orchestrator |         "reef"
2026-04-02 01:14:04.715601 | orchestrator |       ],
2026-04-02 01:14:04.715659 | orchestrator |       "optional": []
2026-04-02 01:14:04.715665 | orchestrator |     },
2026-04-02 01:14:04.715672 | orchestrator |     "mons": [
2026-04-02 01:14:04.715679 | orchestrator |       {
2026-04-02 01:14:04.715686 | orchestrator |         "rank": 0,
2026-04-02 01:14:04.715693 | orchestrator |         "name": "testbed-node-0",
2026-04-02 01:14:04.715701 | orchestrator |         "public_addrs": {
2026-04-02 01:14:04.715708 | orchestrator |           "addrvec": [
2026-04-02 01:14:04.715715 | orchestrator |             {
2026-04-02 01:14:04.715722 | orchestrator |               "type": "v2",
2026-04-02 01:14:04.715729 | orchestrator |               "addr": "192.168.16.10:3300",
2026-04-02 01:14:04.715737 | orchestrator |               "nonce": 0
2026-04-02 01:14:04.715745 | orchestrator |             },
2026-04-02 01:14:04.715753 | orchestrator |             {
2026-04-02 01:14:04.715759 | orchestrator |               "type": "v1",
2026-04-02 01:14:04.715767 | orchestrator |               "addr": "192.168.16.10:6789",
2026-04-02 01:14:04.715774 | orchestrator |               "nonce": 0
2026-04-02 01:14:04.715782 | orchestrator |             }
2026-04-02 01:14:04.715790 | orchestrator |           ]
2026-04-02 01:14:04.715798 | orchestrator |         },
2026-04-02 01:14:04.715805 | orchestrator |         "addr": "192.168.16.10:6789/0",
2026-04-02 01:14:04.715812 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2026-04-02 01:14:04.715819 | orchestrator |         "priority": 0,
2026-04-02 01:14:04.715826 | orchestrator |         "weight": 0,
2026-04-02 01:14:04.715833 | orchestrator |         "crush_location": "{}"
2026-04-02 01:14:04.715839 | orchestrator |       },
2026-04-02 01:14:04.715846 | orchestrator |       {
2026-04-02 01:14:04.715853 | orchestrator |         "rank": 1,
2026-04-02 01:14:04.715859 | orchestrator |         "name": "testbed-node-1",
2026-04-02 01:14:04.715866 | orchestrator |         "public_addrs": {
2026-04-02 01:14:04.715873 | orchestrator |           "addrvec": [
2026-04-02 01:14:04.715880 | orchestrator |             {
2026-04-02 01:14:04.715888 | orchestrator |               "type": "v2",
2026-04-02 01:14:04.715894 | orchestrator |               "addr": "192.168.16.11:3300",
2026-04-02 01:14:04.715901 | orchestrator |               "nonce": 0
2026-04-02 01:14:04.715908 | orchestrator |             },
2026-04-02 01:14:04.715914 | orchestrator |             {
2026-04-02 01:14:04.715920 | orchestrator |               "type": "v1",
2026-04-02 01:14:04.715927 | orchestrator |               "addr": "192.168.16.11:6789",
2026-04-02 01:14:04.715950 | orchestrator |               "nonce": 0
2026-04-02 01:14:04.715957 | orchestrator |             }
2026-04-02 01:14:04.715963 | orchestrator |           ]
2026-04-02 01:14:04.715970 | orchestrator |         },
2026-04-02 01:14:04.715976 | orchestrator |         "addr": "192.168.16.11:6789/0",
2026-04-02 01:14:04.715983 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2026-04-02 01:14:04.715990 | orchestrator |         "priority": 0,
2026-04-02 01:14:04.715996 | orchestrator |         "weight": 0,
2026-04-02 01:14:04.716003 | orchestrator |         "crush_location": "{}"
2026-04-02 01:14:04.716010 | orchestrator |       },
2026-04-02 01:14:04.716017 | orchestrator |       {
2026-04-02 01:14:04.716023 | orchestrator |         "rank": 2,
2026-04-02 01:14:04.716030 | orchestrator |         "name": "testbed-node-2",
2026-04-02 01:14:04.716037 | orchestrator |         "public_addrs": {
2026-04-02 01:14:04.716043 | orchestrator |           "addrvec": [
2026-04-02 01:14:04.716050 | orchestrator |             {
2026-04-02 01:14:04.716056 | orchestrator |               "type": "v2",
2026-04-02 01:14:04.716063 | orchestrator |               "addr": "192.168.16.12:3300",
2026-04-02 01:14:04.716070 | orchestrator |               "nonce": 0
2026-04-02 01:14:04.716076 | orchestrator |             },
2026-04-02 01:14:04.716082 | orchestrator |             {
2026-04-02 01:14:04.716089 | orchestrator |               "type": "v1",
2026-04-02 01:14:04.716095 | orchestrator |               "addr": "192.168.16.12:6789",
2026-04-02 01:14:04.716102 | orchestrator |               "nonce": 0
2026-04-02 01:14:04.716109 | orchestrator |             }
2026-04-02 01:14:04.716115 | orchestrator |           ]
2026-04-02 01:14:04.716122 | orchestrator |         },
2026-04-02 01:14:04.716129 | orchestrator |         "addr": "192.168.16.12:6789/0",
2026-04-02 01:14:04.716136 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2026-04-02 01:14:04.716152 | orchestrator |         "priority": 0,
2026-04-02 01:14:04.716159 | orchestrator |         "weight": 0,
2026-04-02 01:14:04.716165 | orchestrator |         "crush_location": "{}"
2026-04-02 01:14:04.716172 | orchestrator |       }
2026-04-02 01:14:04.716178 | orchestrator |     ]
2026-04-02 01:14:04.716185 | orchestrator |   }
2026-04-02 01:14:04.716192 | orchestrator | }
2026-04-02 01:14:04.716300 | orchestrator |
2026-04-02 01:14:04.716309 | orchestrator | # Ceph free space status
2026-04-02 01:14:04.716316 | orchestrator |
2026-04-02 01:14:04.716323 | orchestrator | + echo
2026-04-02 01:14:04.716330 | orchestrator | + echo '# Ceph free space status'
2026-04-02 01:14:04.716337 | orchestrator | + echo
2026-04-02 01:14:04.716343 | orchestrator | + ceph df
2026-04-02 01:14:05.333898 | orchestrator | --- RAW STORAGE ---
2026-04-02 01:14:05.334068 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-04-02 01:14:05.334100 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2026-04-02 01:14:05.334108 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2026-04-02 01:14:05.334112 | orchestrator |
2026-04-02 01:14:05.334117 | orchestrator | --- POOLS ---
2026-04-02 01:14:05.334121 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-04-02 01:14:05.334127 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2026-04-02 01:14:05.334131 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-04-02 01:14:05.334135 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-04-02 01:14:05.334139 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-04-02 01:14:05.334143 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-04-02 01:14:05.334147 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-04-02 01:14:05.334150 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-04-02 01:14:05.334154 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-04-02 01:14:05.334158 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 52 GiB
2026-04-02 01:14:05.334161 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-04-02 01:14:05.334165 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-04-02 01:14:05.334169 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB
2026-04-02 01:14:05.334172 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-04-02 01:14:05.334176 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-04-02 01:14:05.379414 | orchestrator | ++ semver latest 5.0.0
2026-04-02 01:14:05.430303 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-02 01:14:05.430386 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-02 01:14:05.430398 | orchestrator | + osism apply facts
2026-04-02 01:14:16.776795 | orchestrator | 2026-04-02 01:14:16 | INFO  | Prepare task for execution of facts.
2026-04-02 01:14:16.878240 | orchestrator | 2026-04-02 01:14:16 | INFO  | Task 7f5d08e0-a1c7-4bab-b603-bd8d61f881f5 (facts) was prepared for execution.
2026-04-02 01:14:16.878339 | orchestrator | 2026-04-02 01:14:16 | INFO  | It takes a moment until task 7f5d08e0-a1c7-4bab-b603-bd8d61f881f5 (facts) has been started and output is visible here.
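The sourced include.sh above sets `OSISM_APPLY_RETRY=1` before the checks run `osism apply`. A minimal sketch of the retry pattern that variable implies — note this is an illustration, not the testbed's actual wrapper, and a stub command (`true`/`false`) stands in for `osism apply`:

```shell
# Retry a command up to $OSISM_APPLY_RETRY additional times on failure.
apply_with_retry() {
  retries=${OSISM_APPLY_RETRY:-1}   # extra attempts after the first failure
  n=0
  until "$@"; do                    # run the command; loop while it fails
    n=$((n + 1))
    [ "$n" -gt "$retries" ] && return 1
    echo "retry $n/$retries"
  done
}

# Stub usage: `true` succeeds on the first attempt.
apply_with_retry true && echo "apply succeeded"
```

With `OSISM_APPLY_RETRY=1` a failing command is attempted twice in total before the wrapper gives up, which matches a single-retry policy for transient deployment errors.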
2026-04-02 01:14:29.008186 | orchestrator |
2026-04-02 01:14:29.008238 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-02 01:14:29.008243 | orchestrator |
2026-04-02 01:14:29.008247 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-02 01:14:29.008250 | orchestrator | Thursday 02 April 2026 01:14:20 +0000 (0:00:00.378) 0:00:00.378 ********
2026-04-02 01:14:29.008254 | orchestrator | ok: [testbed-manager]
2026-04-02 01:14:29.008258 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:14:29.008261 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:14:29.008264 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:14:29.008267 | orchestrator | ok: [testbed-node-3]
2026-04-02 01:14:29.008270 | orchestrator | ok: [testbed-node-4]
2026-04-02 01:14:29.008274 | orchestrator | ok: [testbed-node-5]
2026-04-02 01:14:29.008287 | orchestrator |
2026-04-02 01:14:29.008291 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-02 01:14:29.008301 | orchestrator | Thursday 02 April 2026 01:14:22 +0000 (0:00:01.588) 0:00:01.966 ********
2026-04-02 01:14:29.008304 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:14:29.008307 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:14:29.008310 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:14:29.008313 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:14:29.008316 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:14:29.008319 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:14:29.008323 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:14:29.008326 | orchestrator |
2026-04-02 01:14:29.008329 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-02 01:14:29.008332 | orchestrator |
2026-04-02 01:14:29.008335 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-02 01:14:29.008338 | orchestrator | Thursday 02 April 2026 01:14:23 +0000 (0:00:01.290) 0:00:03.257 ********
2026-04-02 01:14:29.008341 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:14:29.008344 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:14:29.008347 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:14:29.008350 | orchestrator | ok: [testbed-node-3]
2026-04-02 01:14:29.008353 | orchestrator | ok: [testbed-node-4]
2026-04-02 01:14:29.008356 | orchestrator | ok: [testbed-manager]
2026-04-02 01:14:29.008359 | orchestrator | ok: [testbed-node-5]
2026-04-02 01:14:29.008362 | orchestrator |
2026-04-02 01:14:29.008365 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-02 01:14:29.008368 | orchestrator |
2026-04-02 01:14:29.008371 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-02 01:14:29.008375 | orchestrator | Thursday 02 April 2026 01:14:27 +0000 (0:00:04.652) 0:00:07.910 ********
2026-04-02 01:14:29.008378 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:14:29.008381 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:14:29.008384 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:14:29.008387 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:14:29.008390 | orchestrator | skipping: [testbed-node-3]
2026-04-02 01:14:29.008393 | orchestrator | skipping: [testbed-node-4]
2026-04-02 01:14:29.008396 | orchestrator | skipping: [testbed-node-5]
2026-04-02 01:14:29.008399 | orchestrator |
2026-04-02 01:14:29.008402 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:14:29.008406 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:14:29.008409 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:14:29.008413 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:14:29.008416 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:14:29.008419 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:14:29.008422 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:14:29.008425 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:14:29.008428 | orchestrator |
2026-04-02 01:14:29.008431 | orchestrator |
2026-04-02 01:14:29.008434 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:14:29.008437 | orchestrator | Thursday 02 April 2026 01:14:28 +0000 (0:00:00.739) 0:00:08.650 ********
2026-04-02 01:14:29.008444 | orchestrator | ===============================================================================
2026-04-02 01:14:29.008447 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s
2026-04-02 01:14:29.008450 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.59s
2026-04-02 01:14:29.008453 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s
2026-04-02 01:14:29.008456 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.74s
2026-04-02 01:14:29.199849 | orchestrator | + osism validate ceph-mons
2026-04-02 01:14:59.950799 | orchestrator |
2026-04-02 01:14:59.950901 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-02 01:14:59.950913 | orchestrator |
2026-04-02 01:14:59.950920 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-02 01:14:59.950924 | orchestrator | Thursday 02 April 2026 01:14:44 +0000 (0:00:00.529) 0:00:00.529 ********
2026-04-02 01:14:59.950931 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-02 01:14:59.950938 | orchestrator |
2026-04-02 01:14:59.950944 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-02 01:14:59.950950 | orchestrator | Thursday 02 April 2026 01:14:45 +0000 (0:00:00.983) 0:00:01.512 ********
2026-04-02 01:14:59.950956 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-02 01:14:59.950963 | orchestrator |
2026-04-02 01:14:59.950969 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-02 01:14:59.950975 | orchestrator | Thursday 02 April 2026 01:14:45 +0000 (0:00:00.728) 0:00:02.240 ********
2026-04-02 01:14:59.950982 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:14:59.950989 | orchestrator |
2026-04-02 01:14:59.950995 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-02 01:14:59.950999 | orchestrator | Thursday 02 April 2026 01:14:46 +0000 (0:00:00.125) 0:00:02.366 ********
2026-04-02 01:14:59.951004 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:14:59.951008 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:14:59.951015 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:14:59.951020 | orchestrator |
2026-04-02 01:14:59.951027 | orchestrator | TASK [Get container info] ******************************************************
2026-04-02 01:14:59.951033 | orchestrator | Thursday 02 April 2026 01:14:46 +0000 (0:00:00.283) 0:00:02.649 ********
2026-04-02 01:14:59.951039 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:14:59.951045 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:14:59.951052 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:14:59.951057 | orchestrator |
2026-04-02 01:14:59.951061 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-02 01:14:59.951065 | orchestrator | Thursday 02 April 2026 01:14:47 +0000 (0:00:01.667) 0:00:04.317 ********
2026-04-02 01:14:59.951069 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:14:59.951074 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:14:59.951077 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:14:59.951081 | orchestrator |
2026-04-02 01:14:59.951085 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-02 01:14:59.951089 | orchestrator | Thursday 02 April 2026 01:14:48 +0000 (0:00:00.304) 0:00:04.621 ********
2026-04-02 01:14:59.951093 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:14:59.951097 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:14:59.951101 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:14:59.951104 | orchestrator |
2026-04-02 01:14:59.951108 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-02 01:14:59.951112 | orchestrator | Thursday 02 April 2026 01:14:48 +0000 (0:00:00.290) 0:00:04.916 ********
2026-04-02 01:14:59.951116 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:14:59.951120 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:14:59.951123 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:14:59.951127 | orchestrator |
2026-04-02 01:14:59.951131 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-02 01:14:59.951136 | orchestrator | Thursday 02 April 2026 01:14:48 +0000 (0:00:00.290) 0:00:05.206 ********
2026-04-02 01:14:59.951166 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:14:59.951171 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:14:59.951175 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:14:59.951179 | orchestrator |
2026-04-02
01:14:59.951182 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-02 01:14:59.951186 | orchestrator | Thursday 02 April 2026 01:14:49 +0000 (0:00:00.440) 0:00:05.647 ******** 2026-04-02 01:14:59.951190 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951194 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:14:59.951212 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:14:59.951216 | orchestrator | 2026-04-02 01:14:59.951219 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-02 01:14:59.951223 | orchestrator | Thursday 02 April 2026 01:14:49 +0000 (0:00:00.310) 0:00:05.957 ******** 2026-04-02 01:14:59.951227 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951230 | orchestrator | 2026-04-02 01:14:59.951234 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-02 01:14:59.951238 | orchestrator | Thursday 02 April 2026 01:14:49 +0000 (0:00:00.244) 0:00:06.202 ******** 2026-04-02 01:14:59.951242 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951246 | orchestrator | 2026-04-02 01:14:59.951252 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-02 01:14:59.951258 | orchestrator | Thursday 02 April 2026 01:14:50 +0000 (0:00:00.246) 0:00:06.449 ******** 2026-04-02 01:14:59.951265 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951271 | orchestrator | 2026-04-02 01:14:59.951278 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:14:59.951282 | orchestrator | Thursday 02 April 2026 01:14:50 +0000 (0:00:00.251) 0:00:06.700 ******** 2026-04-02 01:14:59.951286 | orchestrator | 2026-04-02 01:14:59.951290 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:14:59.951295 | orchestrator | 
Thursday 02 April 2026 01:14:50 +0000 (0:00:00.076) 0:00:06.777 ******** 2026-04-02 01:14:59.951301 | orchestrator | 2026-04-02 01:14:59.951307 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:14:59.951313 | orchestrator | Thursday 02 April 2026 01:14:50 +0000 (0:00:00.079) 0:00:06.857 ******** 2026-04-02 01:14:59.951319 | orchestrator | 2026-04-02 01:14:59.951325 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-02 01:14:59.951331 | orchestrator | Thursday 02 April 2026 01:14:50 +0000 (0:00:00.215) 0:00:07.072 ******** 2026-04-02 01:14:59.951337 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951344 | orchestrator | 2026-04-02 01:14:59.951350 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-02 01:14:59.951354 | orchestrator | Thursday 02 April 2026 01:14:50 +0000 (0:00:00.240) 0:00:07.313 ******** 2026-04-02 01:14:59.951359 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951363 | orchestrator | 2026-04-02 01:14:59.951381 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-02 01:14:59.951387 | orchestrator | Thursday 02 April 2026 01:14:51 +0000 (0:00:00.256) 0:00:07.569 ******** 2026-04-02 01:14:59.951393 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951399 | orchestrator | 2026-04-02 01:14:59.951404 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-02 01:14:59.951414 | orchestrator | Thursday 02 April 2026 01:14:51 +0000 (0:00:00.121) 0:00:07.691 ******** 2026-04-02 01:14:59.951422 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:14:59.951427 | orchestrator | 2026-04-02 01:14:59.951433 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-02 01:14:59.951439 | orchestrator | 
Thursday 02 April 2026 01:14:52 +0000 (0:00:01.549) 0:00:09.240 ******** 2026-04-02 01:14:59.951445 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951452 | orchestrator | 2026-04-02 01:14:59.951458 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-02 01:14:59.951472 | orchestrator | Thursday 02 April 2026 01:14:53 +0000 (0:00:00.318) 0:00:09.559 ******** 2026-04-02 01:14:59.951478 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951484 | orchestrator | 2026-04-02 01:14:59.951491 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-02 01:14:59.951497 | orchestrator | Thursday 02 April 2026 01:14:53 +0000 (0:00:00.115) 0:00:09.674 ******** 2026-04-02 01:14:59.951503 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951552 | orchestrator | 2026-04-02 01:14:59.951559 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-02 01:14:59.951572 | orchestrator | Thursday 02 April 2026 01:14:53 +0000 (0:00:00.335) 0:00:10.010 ******** 2026-04-02 01:14:59.951577 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951583 | orchestrator | 2026-04-02 01:14:59.951589 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-02 01:14:59.951595 | orchestrator | Thursday 02 April 2026 01:14:54 +0000 (0:00:00.328) 0:00:10.339 ******** 2026-04-02 01:14:59.951601 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951607 | orchestrator | 2026-04-02 01:14:59.951614 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-02 01:14:59.951619 | orchestrator | Thursday 02 April 2026 01:14:54 +0000 (0:00:00.119) 0:00:10.458 ******** 2026-04-02 01:14:59.951625 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951631 | orchestrator | 2026-04-02 01:14:59.951638 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-04-02 01:14:59.951643 | orchestrator | Thursday 02 April 2026 01:14:54 +0000 (0:00:00.128) 0:00:10.587 ******** 2026-04-02 01:14:59.951647 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951651 | orchestrator | 2026-04-02 01:14:59.951656 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-02 01:14:59.951660 | orchestrator | Thursday 02 April 2026 01:14:54 +0000 (0:00:00.291) 0:00:10.878 ******** 2026-04-02 01:14:59.951665 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:14:59.951670 | orchestrator | 2026-04-02 01:14:59.951674 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-02 01:14:59.951678 | orchestrator | Thursday 02 April 2026 01:14:55 +0000 (0:00:01.375) 0:00:12.253 ******** 2026-04-02 01:14:59.951683 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951686 | orchestrator | 2026-04-02 01:14:59.951690 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-02 01:14:59.951694 | orchestrator | Thursday 02 April 2026 01:14:56 +0000 (0:00:00.309) 0:00:12.563 ******** 2026-04-02 01:14:59.951697 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951701 | orchestrator | 2026-04-02 01:14:59.951705 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-02 01:14:59.951709 | orchestrator | Thursday 02 April 2026 01:14:56 +0000 (0:00:00.130) 0:00:12.693 ******** 2026-04-02 01:14:59.951715 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:14:59.951721 | orchestrator | 2026-04-02 01:14:59.951727 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-02 01:14:59.951733 | orchestrator | Thursday 02 April 2026 01:14:56 +0000 (0:00:00.148) 0:00:12.842 ******** 2026-04-02 01:14:59.951738 | 
orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951744 | orchestrator | 2026-04-02 01:14:59.951750 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-02 01:14:59.951756 | orchestrator | Thursday 02 April 2026 01:14:56 +0000 (0:00:00.144) 0:00:12.987 ******** 2026-04-02 01:14:59.951762 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951767 | orchestrator | 2026-04-02 01:14:59.951773 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-02 01:14:59.951779 | orchestrator | Thursday 02 April 2026 01:14:56 +0000 (0:00:00.139) 0:00:13.126 ******** 2026-04-02 01:14:59.951784 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:14:59.951789 | orchestrator | 2026-04-02 01:14:59.951795 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-02 01:14:59.951812 | orchestrator | Thursday 02 April 2026 01:14:57 +0000 (0:00:00.242) 0:00:13.369 ******** 2026-04-02 01:14:59.951817 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:14:59.951824 | orchestrator | 2026-04-02 01:14:59.951829 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-02 01:14:59.951835 | orchestrator | Thursday 02 April 2026 01:14:57 +0000 (0:00:00.244) 0:00:13.613 ******** 2026-04-02 01:14:59.951841 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:14:59.951847 | orchestrator | 2026-04-02 01:14:59.951853 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-02 01:14:59.951859 | orchestrator | Thursday 02 April 2026 01:14:59 +0000 (0:00:01.764) 0:00:15.378 ******** 2026-04-02 01:14:59.951865 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:14:59.951871 | orchestrator | 2026-04-02 01:14:59.951877 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-04-02 01:14:59.951882 | orchestrator | Thursday 02 April 2026 01:14:59 +0000 (0:00:00.256) 0:00:15.635 ******** 2026-04-02 01:14:59.951888 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:14:59.951894 | orchestrator | 2026-04-02 01:14:59.951910 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:02.193709 | orchestrator | Thursday 02 April 2026 01:14:59 +0000 (0:00:00.632) 0:00:16.268 ******** 2026-04-02 01:15:02.193757 | orchestrator | 2026-04-02 01:15:02.193763 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:02.193767 | orchestrator | Thursday 02 April 2026 01:15:00 +0000 (0:00:00.075) 0:00:16.343 ******** 2026-04-02 01:15:02.193771 | orchestrator | 2026-04-02 01:15:02.193775 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:02.193779 | orchestrator | Thursday 02 April 2026 01:15:00 +0000 (0:00:00.078) 0:00:16.421 ******** 2026-04-02 01:15:02.193783 | orchestrator | 2026-04-02 01:15:02.193786 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-02 01:15:02.193790 | orchestrator | Thursday 02 April 2026 01:15:00 +0000 (0:00:00.072) 0:00:16.494 ******** 2026-04-02 01:15:02.193794 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:02.193798 | orchestrator | 2026-04-02 01:15:02.193802 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-02 01:15:02.193806 | orchestrator | Thursday 02 April 2026 01:15:01 +0000 (0:00:01.314) 0:00:17.808 ******** 2026-04-02 01:15:02.193809 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-02 01:15:02.193813 | orchestrator |  "msg": [ 
2026-04-02 01:15:02.193818 | orchestrator |  "Validator run completed.", 2026-04-02 01:15:02.193822 | orchestrator |  "You can find the report file here:", 2026-04-02 01:15:02.193826 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-02T01:14:45+00:00-report.json", 2026-04-02 01:15:02.193830 | orchestrator |  "on the following host:", 2026-04-02 01:15:02.193834 | orchestrator |  "testbed-manager" 2026-04-02 01:15:02.193839 | orchestrator |  ] 2026-04-02 01:15:02.193845 | orchestrator | } 2026-04-02 01:15:02.193856 | orchestrator | 2026-04-02 01:15:02.193863 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:15:02.193869 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-02 01:15:02.193876 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 01:15:02.193882 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 01:15:02.193888 | orchestrator | 2026-04-02 01:15:02.193895 | orchestrator | 2026-04-02 01:15:02.193901 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:15:02.193924 | orchestrator | Thursday 02 April 2026 01:15:01 +0000 (0:00:00.398) 0:00:18.206 ******** 2026-04-02 01:15:02.193928 | orchestrator | =============================================================================== 2026-04-02 01:15:02.193932 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s 2026-04-02 01:15:02.193936 | orchestrator | Get container info ------------------------------------------------------ 1.67s 2026-04-02 01:15:02.193940 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.55s 2026-04-02 01:15:02.193943 | orchestrator | Gather status data 
------------------------------------------------------ 1.38s 2026-04-02 01:15:02.193947 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2026-04-02 01:15:02.193951 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s 2026-04-02 01:15:02.193954 | orchestrator | Create report output directory ------------------------------------------ 0.73s 2026-04-02 01:15:02.193958 | orchestrator | Aggregate test results step three --------------------------------------- 0.63s 2026-04-02 01:15:02.193962 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.44s 2026-04-02 01:15:02.193966 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-02 01:15:02.193969 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s 2026-04-02 01:15:02.193973 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s 2026-04-02 01:15:02.193977 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s 2026-04-02 01:15:02.193980 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s 2026-04-02 01:15:02.193984 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s 2026-04-02 01:15:02.193988 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2026-04-02 01:15:02.193994 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-04-02 01:15:02.194000 | orchestrator | Set test result to passed if container is existing ---------------------- 0.29s 2026-04-02 01:15:02.194005 | orchestrator | Prepare status test vars ------------------------------------------------ 0.29s 2026-04-02 01:15:02.194010 | orchestrator | Prepare test data 
------------------------------------------------------- 0.29s 2026-04-02 01:15:02.386915 | orchestrator | + osism validate ceph-mgrs 2026-04-02 01:15:31.419745 | orchestrator | 2026-04-02 01:15:31.419830 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-02 01:15:31.419838 | orchestrator | 2026-04-02 01:15:31.419842 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-02 01:15:31.419847 | orchestrator | Thursday 02 April 2026 01:15:17 +0000 (0:00:00.513) 0:00:00.513 ******** 2026-04-02 01:15:31.419852 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:31.419856 | orchestrator | 2026-04-02 01:15:31.419860 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-02 01:15:31.419864 | orchestrator | Thursday 02 April 2026 01:15:18 +0000 (0:00:00.987) 0:00:01.501 ******** 2026-04-02 01:15:31.419869 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:31.419873 | orchestrator | 2026-04-02 01:15:31.419878 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-02 01:15:31.419883 | orchestrator | Thursday 02 April 2026 01:15:19 +0000 (0:00:00.679) 0:00:02.180 ******** 2026-04-02 01:15:31.419889 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.419896 | orchestrator | 2026-04-02 01:15:31.419910 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-02 01:15:31.419916 | orchestrator | Thursday 02 April 2026 01:15:19 +0000 (0:00:00.117) 0:00:02.298 ******** 2026-04-02 01:15:31.419922 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.419928 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:15:31.419933 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:15:31.419938 | orchestrator | 2026-04-02 01:15:31.419959 | orchestrator | TASK [Get 
container info] ****************************************************** 2026-04-02 01:15:31.419966 | orchestrator | Thursday 02 April 2026 01:15:19 +0000 (0:00:00.279) 0:00:02.577 ******** 2026-04-02 01:15:31.419972 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:15:31.419978 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.419984 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:15:31.419991 | orchestrator | 2026-04-02 01:15:31.419997 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-02 01:15:31.420002 | orchestrator | Thursday 02 April 2026 01:15:20 +0000 (0:00:01.418) 0:00:03.996 ******** 2026-04-02 01:15:31.420006 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420010 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:15:31.420017 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:15:31.420021 | orchestrator | 2026-04-02 01:15:31.420025 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-02 01:15:31.420028 | orchestrator | Thursday 02 April 2026 01:15:21 +0000 (0:00:00.311) 0:00:04.307 ******** 2026-04-02 01:15:31.420032 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.420036 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:15:31.420040 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:15:31.420043 | orchestrator | 2026-04-02 01:15:31.420047 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-02 01:15:31.420051 | orchestrator | Thursday 02 April 2026 01:15:21 +0000 (0:00:00.305) 0:00:04.613 ******** 2026-04-02 01:15:31.420054 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.420058 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:15:31.420062 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:15:31.420065 | orchestrator | 2026-04-02 01:15:31.420069 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-04-02 01:15:31.420073 | orchestrator | Thursday 02 April 2026 01:15:21 +0000 (0:00:00.311) 0:00:04.924 ******** 2026-04-02 01:15:31.420077 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420081 | orchestrator | skipping: [testbed-node-1] 2026-04-02 01:15:31.420084 | orchestrator | skipping: [testbed-node-2] 2026-04-02 01:15:31.420088 | orchestrator | 2026-04-02 01:15:31.420092 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-02 01:15:31.420095 | orchestrator | Thursday 02 April 2026 01:15:22 +0000 (0:00:00.449) 0:00:05.374 ******** 2026-04-02 01:15:31.420099 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.420103 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:15:31.420107 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:15:31.420110 | orchestrator | 2026-04-02 01:15:31.420114 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-02 01:15:31.420118 | orchestrator | Thursday 02 April 2026 01:15:22 +0000 (0:00:00.343) 0:00:05.717 ******** 2026-04-02 01:15:31.420121 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420125 | orchestrator | 2026-04-02 01:15:31.420129 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-02 01:15:31.420133 | orchestrator | Thursday 02 April 2026 01:15:22 +0000 (0:00:00.253) 0:00:05.971 ******** 2026-04-02 01:15:31.420137 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420140 | orchestrator | 2026-04-02 01:15:31.420144 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-02 01:15:31.420148 | orchestrator | Thursday 02 April 2026 01:15:23 +0000 (0:00:00.231) 0:00:06.203 ******** 2026-04-02 01:15:31.420152 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420155 | orchestrator | 2026-04-02 01:15:31.420159 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-04-02 01:15:31.420163 | orchestrator | Thursday 02 April 2026 01:15:23 +0000 (0:00:00.300) 0:00:06.503 ******** 2026-04-02 01:15:31.420167 | orchestrator | 2026-04-02 01:15:31.420170 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:31.420174 | orchestrator | Thursday 02 April 2026 01:15:23 +0000 (0:00:00.082) 0:00:06.586 ******** 2026-04-02 01:15:31.420178 | orchestrator | 2026-04-02 01:15:31.420186 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:31.420190 | orchestrator | Thursday 02 April 2026 01:15:23 +0000 (0:00:00.078) 0:00:06.664 ******** 2026-04-02 01:15:31.420193 | orchestrator | 2026-04-02 01:15:31.420197 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-02 01:15:31.420201 | orchestrator | Thursday 02 April 2026 01:15:23 +0000 (0:00:00.226) 0:00:06.890 ******** 2026-04-02 01:15:31.420205 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420208 | orchestrator | 2026-04-02 01:15:31.420212 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-02 01:15:31.420216 | orchestrator | Thursday 02 April 2026 01:15:24 +0000 (0:00:00.268) 0:00:07.159 ******** 2026-04-02 01:15:31.420220 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420223 | orchestrator | 2026-04-02 01:15:31.420238 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-02 01:15:31.420242 | orchestrator | Thursday 02 April 2026 01:15:24 +0000 (0:00:00.263) 0:00:07.422 ******** 2026-04-02 01:15:31.420246 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.420250 | orchestrator | 2026-04-02 01:15:31.420254 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-04-02 01:15:31.420257 | orchestrator | Thursday 02 April 2026 01:15:24 +0000 (0:00:00.135) 0:00:07.558 ******** 2026-04-02 01:15:31.420261 | orchestrator | changed: [testbed-node-0] 2026-04-02 01:15:31.420265 | orchestrator | 2026-04-02 01:15:31.420269 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-02 01:15:31.420272 | orchestrator | Thursday 02 April 2026 01:15:26 +0000 (0:00:01.645) 0:00:09.203 ******** 2026-04-02 01:15:31.420276 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.420281 | orchestrator | 2026-04-02 01:15:31.420288 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-02 01:15:31.420294 | orchestrator | Thursday 02 April 2026 01:15:26 +0000 (0:00:00.243) 0:00:09.446 ******** 2026-04-02 01:15:31.420300 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.420305 | orchestrator | 2026-04-02 01:15:31.420312 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-02 01:15:31.420318 | orchestrator | Thursday 02 April 2026 01:15:26 +0000 (0:00:00.311) 0:00:09.758 ******** 2026-04-02 01:15:31.420324 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420330 | orchestrator | 2026-04-02 01:15:31.420335 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-02 01:15:31.420341 | orchestrator | Thursday 02 April 2026 01:15:26 +0000 (0:00:00.135) 0:00:09.893 ******** 2026-04-02 01:15:31.420349 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:15:31.420355 | orchestrator | 2026-04-02 01:15:31.420362 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-02 01:15:31.420369 | orchestrator | Thursday 02 April 2026 01:15:26 +0000 (0:00:00.139) 0:00:10.033 ******** 2026-04-02 01:15:31.420375 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 
01:15:31.420381 | orchestrator | 2026-04-02 01:15:31.420387 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-02 01:15:31.420399 | orchestrator | Thursday 02 April 2026 01:15:27 +0000 (0:00:00.248) 0:00:10.282 ******** 2026-04-02 01:15:31.420406 | orchestrator | skipping: [testbed-node-0] 2026-04-02 01:15:31.420413 | orchestrator | 2026-04-02 01:15:31.420419 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-02 01:15:31.420426 | orchestrator | Thursday 02 April 2026 01:15:27 +0000 (0:00:00.234) 0:00:10.516 ******** 2026-04-02 01:15:31.420433 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:31.420439 | orchestrator | 2026-04-02 01:15:31.420445 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-02 01:15:31.420451 | orchestrator | Thursday 02 April 2026 01:15:28 +0000 (0:00:01.519) 0:00:12.036 ******** 2026-04-02 01:15:31.420506 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:31.420520 | orchestrator | 2026-04-02 01:15:31.420527 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-02 01:15:31.420532 | orchestrator | Thursday 02 April 2026 01:15:29 +0000 (0:00:00.282) 0:00:12.319 ******** 2026-04-02 01:15:31.420536 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:31.420541 | orchestrator | 2026-04-02 01:15:31.420545 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:31.420550 | orchestrator | Thursday 02 April 2026 01:15:29 +0000 (0:00:00.257) 0:00:12.577 ******** 2026-04-02 01:15:31.420554 | orchestrator | 2026-04-02 01:15:31.420559 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:31.420563 | orchestrator 
| Thursday 02 April 2026 01:15:29 +0000 (0:00:00.070) 0:00:12.647 ******** 2026-04-02 01:15:31.420567 | orchestrator | 2026-04-02 01:15:31.420572 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:15:31.420576 | orchestrator | Thursday 02 April 2026 01:15:29 +0000 (0:00:00.067) 0:00:12.714 ******** 2026-04-02 01:15:31.420580 | orchestrator | 2026-04-02 01:15:31.420585 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-02 01:15:31.420589 | orchestrator | Thursday 02 April 2026 01:15:29 +0000 (0:00:00.073) 0:00:12.788 ******** 2026-04-02 01:15:31.420594 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:31.420598 | orchestrator | 2026-04-02 01:15:31.420603 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-02 01:15:31.420607 | orchestrator | Thursday 02 April 2026 01:15:30 +0000 (0:00:01.308) 0:00:14.096 ******** 2026-04-02 01:15:31.420611 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-02 01:15:31.420616 | orchestrator |  "msg": [ 2026-04-02 01:15:31.420620 | orchestrator |  "Validator run completed.", 2026-04-02 01:15:31.420625 | orchestrator |  "You can find the report file here:", 2026-04-02 01:15:31.420630 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-02T01:15:18+00:00-report.json", 2026-04-02 01:15:31.420635 | orchestrator |  "on the following host:", 2026-04-02 01:15:31.420640 | orchestrator |  "testbed-manager" 2026-04-02 01:15:31.420644 | orchestrator |  ] 2026-04-02 01:15:31.420649 | orchestrator | } 2026-04-02 01:15:31.420653 | orchestrator | 2026-04-02 01:15:31.420658 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:15:31.420663 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-02 01:15:31.420669 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 01:15:31.420680 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-02 01:15:31.767363 | orchestrator | 2026-04-02 01:15:31.767566 | orchestrator | 2026-04-02 01:15:31.767588 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:15:31.767598 | orchestrator | Thursday 02 April 2026 01:15:31 +0000 (0:00:00.413) 0:00:14.509 ******** 2026-04-02 01:15:31.767604 | orchestrator | =============================================================================== 2026-04-02 01:15:31.767611 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.65s 2026-04-02 01:15:31.767617 | orchestrator | Aggregate test results step one ----------------------------------------- 1.52s 2026-04-02 01:15:31.767622 | orchestrator | Get container info ------------------------------------------------------ 1.42s 2026-04-02 01:15:31.767626 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2026-04-02 01:15:31.767630 | orchestrator | Get timestamp for report file ------------------------------------------- 0.99s 2026-04-02 01:15:31.767634 | orchestrator | Create report output directory ------------------------------------------ 0.68s 2026-04-02 01:15:31.767660 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.45s 2026-04-02 01:15:31.767664 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-04-02 01:15:31.767668 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-04-02 01:15:31.767672 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.34s 2026-04-02 01:15:31.767676 | 
orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.31s 2026-04-02 01:15:31.767680 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-04-02 01:15:31.767684 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-04-02 01:15:31.767689 | orchestrator | Set test result to passed if container is existing ---------------------- 0.31s 2026-04-02 01:15:31.767693 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s 2026-04-02 01:15:31.767697 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2026-04-02 01:15:31.767701 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-04-02 01:15:31.767705 | orchestrator | Print report file information ------------------------------------------- 0.27s 2026-04-02 01:15:31.767710 | orchestrator | Fail due to missing containers ------------------------------------------ 0.26s 2026-04-02 01:15:31.767714 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2026-04-02 01:15:31.946748 | orchestrator | + osism validate ceph-osds 2026-04-02 01:15:50.742899 | orchestrator | 2026-04-02 01:15:50.742975 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-02 01:15:50.742988 | orchestrator | 2026-04-02 01:15:50.742998 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-02 01:15:50.743009 | orchestrator | Thursday 02 April 2026 01:15:46 +0000 (0:00:00.495) 0:00:00.495 ******** 2026-04-02 01:15:50.743019 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:50.743028 | orchestrator | 2026-04-02 01:15:50.743038 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-04-02 01:15:50.743047 | orchestrator | Thursday 02 April 2026 01:15:47 +0000 (0:00:00.965) 0:00:01.461 ******** 2026-04-02 01:15:50.743056 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:50.743066 | orchestrator | 2026-04-02 01:15:50.743075 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-02 01:15:50.743084 | orchestrator | Thursday 02 April 2026 01:15:48 +0000 (0:00:00.246) 0:00:01.707 ******** 2026-04-02 01:15:50.743093 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:15:50.743103 | orchestrator | 2026-04-02 01:15:50.743112 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-02 01:15:50.743121 | orchestrator | Thursday 02 April 2026 01:15:48 +0000 (0:00:00.702) 0:00:02.409 ******** 2026-04-02 01:15:50.743131 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:15:50.743141 | orchestrator | 2026-04-02 01:15:50.743149 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-02 01:15:50.743158 | orchestrator | Thursday 02 April 2026 01:15:48 +0000 (0:00:00.123) 0:00:02.533 ******** 2026-04-02 01:15:50.743168 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:15:50.743177 | orchestrator | 2026-04-02 01:15:50.743187 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-02 01:15:50.743196 | orchestrator | Thursday 02 April 2026 01:15:49 +0000 (0:00:00.131) 0:00:02.665 ******** 2026-04-02 01:15:50.743205 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:15:50.743215 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:15:50.743224 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:15:50.743234 | orchestrator | 2026-04-02 01:15:50.743242 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-02 01:15:50.743252 | orchestrator | Thursday 02 April 2026 01:15:49 +0000 (0:00:00.424) 0:00:03.089 ******** 2026-04-02 01:15:50.743262 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:15:50.743291 | orchestrator | 2026-04-02 01:15:50.743302 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-02 01:15:50.743312 | orchestrator | Thursday 02 April 2026 01:15:49 +0000 (0:00:00.162) 0:00:03.252 ******** 2026-04-02 01:15:50.743322 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:15:50.743330 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:15:50.743340 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:15:50.743350 | orchestrator | 2026-04-02 01:15:50.743361 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-02 01:15:50.743383 | orchestrator | Thursday 02 April 2026 01:15:49 +0000 (0:00:00.317) 0:00:03.570 ******** 2026-04-02 01:15:50.743393 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:15:50.743403 | orchestrator | 2026-04-02 01:15:50.743413 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-02 01:15:50.743507 | orchestrator | Thursday 02 April 2026 01:15:50 +0000 (0:00:00.337) 0:00:03.907 ******** 2026-04-02 01:15:50.743521 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:15:50.743531 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:15:50.743541 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:15:50.743552 | orchestrator | 2026-04-02 01:15:50.743562 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-02 01:15:50.743572 | orchestrator | Thursday 02 April 2026 01:15:50 +0000 (0:00:00.302) 0:00:04.209 ******** 2026-04-02 01:15:50.743584 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'efd71bd071c3c8177a19e0683136ce8f0214cf84831e1c689343c0419b02c4e7', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-02 01:15:50.743595 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f5776a56a99f074d3a211c469aa5f4a15cd295b4cc54eb8030b32b8c8177c026', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-02 01:15:50.743606 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b15a5b98aa20cd814b472d6ea1676960fd60a4f370959498225d45017c41d0bb', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-02 01:15:50.743616 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a8f9ffe2c157e5bb9b5a1f5b8a4d57258d5e509ba9296ebae6b345fc05f073e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-02 01:15:50.743639 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1294d8fb65efc7e7b48676cc0c9a1e4a0f990429cca3814493ca179d3ce5091c', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-02 01:15:50.743665 | orchestrator | skipping: [testbed-node-3] => (item={'id': '987eb1719aae0c0dc37971bea1901eb28001866be32d97ec0cd7639cf0cb2cc9', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-02 01:15:50.743675 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d3f479805103fce9f12064e35decaad1d77b2ccd059bd4151d29eaa7f0c591c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-02 01:15:50.743685 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': 'b22ff625eba6122932ba83b9a4cfdf2736b53c59c544992016d33963b34775e9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-02 01:15:50.743694 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c4b92dd41acdf8497cb848030c2a4d8d4f69a67155493614a4463e04d50bd124', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-02 01:15:50.743713 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c8b027783b91c5e1c3871c0973b67dd3794bfbbb3d5c50c1a71c8fb0b8a79f0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-02 01:15:50.743723 | orchestrator | ok: [testbed-node-3] => (item={'id': '293358f4134cc7558a8ca7f6d218da77842c8ff6b90ee5778be688fa4b37d781', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-02 01:15:50.743733 | orchestrator | ok: [testbed-node-3] => (item={'id': '1b2c9f25e74699c5ab69a7acb8baaf80e2d39062b6501956105366fc21cd31bf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-02 01:15:50.743743 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f152d4ad03d34ae09d2b5764c3bfe7b68024bda8d73c9cdb32ab17a186f7be94', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})  2026-04-02 01:15:50.743752 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0bc19236444795069b7ed63a9a938382583b3f2cf7f4ebd891778255fd0bb30e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
27 minutes (healthy)'})  2026-04-02 01:15:50.743762 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a40dff2ef908cdc3da0341f195aec4bcbb8a3169d00966463084e0d68f9ad4a7', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-02 01:15:50.743771 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6863f5540fcf423df44f3b4acbd35b3243f7dd00584c2f5ebe66cb228b47f314', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-02 01:15:50.743781 | orchestrator | skipping: [testbed-node-3] => (item={'id': '09ffa1e35d4cf7ed51fd88c3aeb8b061dfe28b3c13363b0c8dc8a46674f7fd09', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-02 01:15:50.743790 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8971d8b572753a4f164688507c5b48bb5ee94eda3703d7ec7e8c36ab319870f3', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-02 01:15:50.743800 | orchestrator | skipping: [testbed-node-4] => (item={'id': '839e1f595fee57ecf9efcf138271e4022a733b864ad502d3d9ab314ebadb3186', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-02 01:15:50.743809 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ce938b847fc581793233da9ef3c9962f05079df8b62c95af1806bddee17f1720', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-02 01:15:50.743822 | orchestrator | skipping: [testbed-node-4] => (item={'id': '78f2fa891235e8c116fed6269f828ce63c8d02e3bf4c1ed086a4508290b1ae91', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-02 01:15:50.743839 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7cfb3f6fdcc1aa86ad440e800ab34535e44e1d821a4f002fc79e48697bce4b1b', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-02 01:15:50.945177 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c08ca2fd96e5658bb1bf5b372d5a2279dd3fa956e4d44e6a06d60f22b898124b', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-02 01:15:50.945265 | orchestrator | skipping: [testbed-node-4] => (item={'id': '631875c6fb9870b2e35b41a37e71817b3a7ec61b1f505965e8049814a0451472', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-02 01:15:50.945279 | orchestrator | skipping: [testbed-node-4] => (item={'id': '11e034593e95867c1687146f3d32d4014480a473be0c55e9d409b16dfbd6a58a', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-02 01:15:50.945289 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'db77dccbdd427a4a7b3d5f10d3fec31b0157f25049320eda5e2927a856c54665', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-02 01:15:50.945298 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cc140c8360588f688960f8b13c959a88575eac63a805528bf09d46e55980f078', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-02 01:15:50.945308 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'6bffb6b79843e4dc64f5809bf168b2239589dcc50c258e0fd201f3d2a4fbcfa2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-02 01:15:50.945319 | orchestrator | ok: [testbed-node-4] => (item={'id': '0787aab133c5379e7aaf8cfeeb2f21283d8491e5b873583781d21fb6d550af46', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-02 01:15:50.945329 | orchestrator | ok: [testbed-node-4] => (item={'id': '44334d352a4c4ae23520f76c8f96a43c0fe694a56d8459907868424ac3d4ad25', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-02 01:15:50.945338 | orchestrator | skipping: [testbed-node-4] => (item={'id': '37f4da51633c2ca5952a07ab88483d1da2d874857b5216c17176d4d63cbd192b', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})  2026-04-02 01:15:50.945347 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef8eb4b4f104a8d8ee2dd5c16c6bdffef944eedb047d82fd0734a9a0a0998b10', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-02 01:15:50.945356 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cfc8f21a80116038e0d502a95368d70fb630f3d5fe1a6e2c8b0437ed34002f5b', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-02 01:15:50.945365 | orchestrator | skipping: [testbed-node-4] => (item={'id': '82ee14a07ae50bd028d0264c49269ddd119320197284cd07dfe32c88819f072e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-02 01:15:50.945374 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '6a33468e7851af422b41acc5a71cce9ef83c30d6b50307f70c73cdf754610b91', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-02 01:15:50.945383 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6f8fb1b7edb4fdc230cdd4c8f9577f741df9e0ab208aaed7390a00bbcb79498e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-02 01:15:50.945392 | orchestrator | skipping: [testbed-node-5] => (item={'id': '805a896a4d46c0c7bfcb48bd72426ebbf88f33855ecb4eb6bca9b4069cc242da', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-02 01:15:50.945419 | orchestrator | skipping: [testbed-node-5] => (item={'id': '941fa90ebcd6eb2a0547f2ab803fe20c9d112849d7e8948ce35dafd79dcf26d3', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-02 01:15:50.945477 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c26bccfd4891f276f41ad5f12f0e0d066d39373690ea2d9d8c235af67558d360', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-02 01:15:50.945487 | orchestrator | skipping: [testbed-node-5] => (item={'id': '319d9e13e8a8be55bfdf668d4c237b99960d23575fdcd580b39ae2931f6ccac7', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-02 01:15:50.945494 | orchestrator | skipping: [testbed-node-5] => (item={'id': '32a5ade81c6070bb59eab5b410c2f621d0624a63c69131d515bb42c82c9c8536', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 
'status': 'Up 14 minutes'})  2026-04-02 01:15:50.945502 | orchestrator | skipping: [testbed-node-5] => (item={'id': '10a9f9752ec3671b821c9716fe87acd469da8848887839b86ef2fa75a07c2c2c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-02 01:15:50.945510 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1219c0724711837c9dcdf67a4b1cd134151417a4cfb30c740581826a44f0088a', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-02 01:15:50.945518 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8fa33a5a562c972b0fc29d677e02b846724bb0d2cb03f3ae77c29c8680044360', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-02 01:15:50.945527 | orchestrator | skipping: [testbed-node-5] => (item={'id': '287806ec01552c797bcfc282de3700c1504c3acff128d6d63e6b1579ae796647', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-02 01:15:50.945536 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f1fe903debd962f0f47a43859b965a2441e6c1eeded489182525816086503789', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-02 01:15:50.945557 | orchestrator | ok: [testbed-node-5] => (item={'id': '1611773fcfeb4f8d5d6bffb992efbd08875c4ea3d47ab114310a55cd47e95c44', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-02 01:15:50.945566 | orchestrator | ok: [testbed-node-5] => (item={'id': '93eacc1e250c8ad7eb5c813b1a45f49bf2f42a491dd39b34427c64b13e16ab6d', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-02 01:15:50.945574 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f6bd65ad10047d6e8573439adcefa8949856978147a3054f9676d63a6307bf2', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})  2026-04-02 01:15:50.945584 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2562fe8d5aabdc876ee66be98b0f318ac6962fe97e3d659fb04e5d071fab4e7f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-02 01:15:50.945592 | orchestrator | skipping: [testbed-node-5] => (item={'id': '515b8107f4d5c83bd6f2675f6998b36e4edc77b181a9095a652c2a2f006e30d3', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-02 01:15:50.945612 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a7be876b91d8a41eaa2833d8084c249d10102886dfd767096ee8fca1d3c1936b', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-02 01:15:50.945620 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1ef00cc59494d8fee33feaec647f64b2f14b60fea4e085376682b076153e0413', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-02 01:15:50.945636 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5ac311a6672d26d4b8c2bc6228f1856efcd0a26e134e9c4e1a2b53d31045bc30', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-02 01:16:03.731586 | orchestrator | 2026-04-02 01:16:03.731668 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-04-02 01:16:03.731684 | orchestrator | Thursday 02 April 2026 01:15:51 +0000 (0:00:00.649) 0:00:04.859 ******** 2026-04-02 01:16:03.731691 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.731698 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.731705 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.731711 | orchestrator | 2026-04-02 01:16:03.731717 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-02 01:16:03.731723 | orchestrator | Thursday 02 April 2026 01:15:51 +0000 (0:00:00.302) 0:00:05.161 ******** 2026-04-02 01:16:03.731730 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.731737 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:03.731744 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:03.731751 | orchestrator | 2026-04-02 01:16:03.731758 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-02 01:16:03.731764 | orchestrator | Thursday 02 April 2026 01:15:51 +0000 (0:00:00.283) 0:00:05.444 ******** 2026-04-02 01:16:03.731769 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.731775 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.731781 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.731787 | orchestrator | 2026-04-02 01:16:03.731793 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-02 01:16:03.731799 | orchestrator | Thursday 02 April 2026 01:15:52 +0000 (0:00:00.315) 0:00:05.760 ******** 2026-04-02 01:16:03.731814 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.731821 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.731829 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.731834 | orchestrator | 2026-04-02 01:16:03.731841 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-02 
01:16:03.731848 | orchestrator | Thursday 02 April 2026 01:15:52 +0000 (0:00:00.461) 0:00:06.222 ******** 2026-04-02 01:16:03.731854 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-02 01:16:03.731862 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-02 01:16:03.731869 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.731876 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-02 01:16:03.731883 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-02 01:16:03.731888 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:03.731892 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-02 01:16:03.731896 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-02 01:16:03.731900 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:03.731904 | orchestrator | 2026-04-02 01:16:03.731908 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-02 01:16:03.731912 | orchestrator | Thursday 02 April 2026 01:15:52 +0000 (0:00:00.304) 0:00:06.526 ******** 2026-04-02 01:16:03.731933 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.731937 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.731940 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.731944 | orchestrator | 2026-04-02 01:16:03.731948 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-02 01:16:03.731952 | orchestrator | Thursday 02 April 2026 01:15:53 +0000 (0:00:00.298) 0:00:06.825 ******** 2026-04-02 01:16:03.731956 | orchestrator | skipping: [testbed-node-3] 
2026-04-02 01:16:03.731959 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:03.731963 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:03.731967 | orchestrator | 2026-04-02 01:16:03.731971 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-02 01:16:03.731975 | orchestrator | Thursday 02 April 2026 01:15:53 +0000 (0:00:00.324) 0:00:07.149 ******** 2026-04-02 01:16:03.731979 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.731983 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:03.731986 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:03.731990 | orchestrator | 2026-04-02 01:16:03.731994 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-02 01:16:03.731998 | orchestrator | Thursday 02 April 2026 01:15:53 +0000 (0:00:00.442) 0:00:07.592 ******** 2026-04-02 01:16:03.732002 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732006 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.732009 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.732013 | orchestrator | 2026-04-02 01:16:03.732017 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-02 01:16:03.732021 | orchestrator | Thursday 02 April 2026 01:15:54 +0000 (0:00:00.307) 0:00:07.899 ******** 2026-04-02 01:16:03.732024 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732028 | orchestrator | 2026-04-02 01:16:03.732032 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-02 01:16:03.732043 | orchestrator | Thursday 02 April 2026 01:15:54 +0000 (0:00:00.259) 0:00:08.159 ******** 2026-04-02 01:16:03.732047 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732050 | orchestrator | 2026-04-02 01:16:03.732054 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-04-02 01:16:03.732058 | orchestrator | Thursday 02 April 2026 01:15:54 +0000 (0:00:00.256) 0:00:08.415 ******** 2026-04-02 01:16:03.732062 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732065 | orchestrator | 2026-04-02 01:16:03.732069 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:16:03.732073 | orchestrator | Thursday 02 April 2026 01:15:55 +0000 (0:00:00.240) 0:00:08.656 ******** 2026-04-02 01:16:03.732076 | orchestrator | 2026-04-02 01:16:03.732080 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:16:03.732084 | orchestrator | Thursday 02 April 2026 01:15:55 +0000 (0:00:00.083) 0:00:08.739 ******** 2026-04-02 01:16:03.732087 | orchestrator | 2026-04-02 01:16:03.732091 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:16:03.732106 | orchestrator | Thursday 02 April 2026 01:15:55 +0000 (0:00:00.068) 0:00:08.808 ******** 2026-04-02 01:16:03.732110 | orchestrator | 2026-04-02 01:16:03.732114 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-02 01:16:03.732118 | orchestrator | Thursday 02 April 2026 01:15:55 +0000 (0:00:00.069) 0:00:08.877 ******** 2026-04-02 01:16:03.732122 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732125 | orchestrator | 2026-04-02 01:16:03.732129 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-02 01:16:03.732133 | orchestrator | Thursday 02 April 2026 01:15:55 +0000 (0:00:00.644) 0:00:09.521 ******** 2026-04-02 01:16:03.732136 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732140 | orchestrator | 2026-04-02 01:16:03.732144 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-02 01:16:03.732147 | 
orchestrator | Thursday 02 April 2026 01:15:56 +0000 (0:00:00.244) 0:00:09.765 ******** 2026-04-02 01:16:03.732156 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732161 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.732165 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.732170 | orchestrator | 2026-04-02 01:16:03.732174 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-02 01:16:03.732179 | orchestrator | Thursday 02 April 2026 01:15:56 +0000 (0:00:00.274) 0:00:10.040 ******** 2026-04-02 01:16:03.732183 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732188 | orchestrator | 2026-04-02 01:16:03.732192 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-02 01:16:03.732197 | orchestrator | Thursday 02 April 2026 01:15:56 +0000 (0:00:00.219) 0:00:10.259 ******** 2026-04-02 01:16:03.732202 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-02 01:16:03.732206 | orchestrator | 2026-04-02 01:16:03.732211 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-02 01:16:03.732215 | orchestrator | Thursday 02 April 2026 01:15:58 +0000 (0:00:02.041) 0:00:12.301 ******** 2026-04-02 01:16:03.732220 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732224 | orchestrator | 2026-04-02 01:16:03.732228 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-02 01:16:03.732233 | orchestrator | Thursday 02 April 2026 01:15:58 +0000 (0:00:00.134) 0:00:12.435 ******** 2026-04-02 01:16:03.732237 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732241 | orchestrator | 2026-04-02 01:16:03.732246 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-02 01:16:03.732251 | orchestrator | Thursday 02 April 2026 01:15:59 +0000 (0:00:00.292) 
0:00:12.728 ******** 2026-04-02 01:16:03.732255 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732260 | orchestrator | 2026-04-02 01:16:03.732265 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-02 01:16:03.732280 | orchestrator | Thursday 02 April 2026 01:15:59 +0000 (0:00:00.103) 0:00:12.831 ******** 2026-04-02 01:16:03.732284 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732289 | orchestrator | 2026-04-02 01:16:03.732299 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-02 01:16:03.732304 | orchestrator | Thursday 02 April 2026 01:15:59 +0000 (0:00:00.123) 0:00:12.955 ******** 2026-04-02 01:16:03.732308 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732312 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.732317 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.732321 | orchestrator | 2026-04-02 01:16:03.732326 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-02 01:16:03.732331 | orchestrator | Thursday 02 April 2026 01:15:59 +0000 (0:00:00.445) 0:00:13.400 ******** 2026-04-02 01:16:03.732335 | orchestrator | changed: [testbed-node-3] 2026-04-02 01:16:03.732340 | orchestrator | changed: [testbed-node-4] 2026-04-02 01:16:03.732345 | orchestrator | changed: [testbed-node-5] 2026-04-02 01:16:03.732349 | orchestrator | 2026-04-02 01:16:03.732353 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-02 01:16:03.732358 | orchestrator | Thursday 02 April 2026 01:16:01 +0000 (0:00:01.755) 0:00:15.156 ******** 2026-04-02 01:16:03.732362 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732367 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.732371 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.732376 | orchestrator | 2026-04-02 01:16:03.732380 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-04-02 01:16:03.732385 | orchestrator | Thursday 02 April 2026 01:16:01 +0000 (0:00:00.282) 0:00:15.438 ******** 2026-04-02 01:16:03.732390 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732394 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.732399 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.732426 | orchestrator | 2026-04-02 01:16:03.732431 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-02 01:16:03.732435 | orchestrator | Thursday 02 April 2026 01:16:02 +0000 (0:00:00.472) 0:00:15.911 ******** 2026-04-02 01:16:03.732444 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732448 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:03.732453 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:03.732457 | orchestrator | 2026-04-02 01:16:03.732462 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-02 01:16:03.732470 | orchestrator | Thursday 02 April 2026 01:16:02 +0000 (0:00:00.455) 0:00:16.366 ******** 2026-04-02 01:16:03.732474 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:03.732479 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:03.732483 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:03.732488 | orchestrator | 2026-04-02 01:16:03.732492 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-02 01:16:03.732497 | orchestrator | Thursday 02 April 2026 01:16:03 +0000 (0:00:00.297) 0:00:16.664 ******** 2026-04-02 01:16:03.732501 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732505 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:03.732509 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:03.732513 | orchestrator | 2026-04-02 01:16:03.732516 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-04-02 01:16:03.732520 | orchestrator | Thursday 02 April 2026 01:16:03 +0000 (0:00:00.266) 0:00:16.931 ******** 2026-04-02 01:16:03.732524 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:03.732527 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:03.732531 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:03.732535 | orchestrator | 2026-04-02 01:16:03.732541 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-02 01:16:10.825135 | orchestrator | Thursday 02 April 2026 01:16:03 +0000 (0:00:00.452) 0:00:17.383 ******** 2026-04-02 01:16:10.825235 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:10.825242 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:10.825246 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:10.825250 | orchestrator | 2026-04-02 01:16:10.825255 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-02 01:16:10.825259 | orchestrator | Thursday 02 April 2026 01:16:04 +0000 (0:00:00.483) 0:00:17.866 ******** 2026-04-02 01:16:10.825263 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:10.825267 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:10.825271 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:10.825274 | orchestrator | 2026-04-02 01:16:10.825278 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-02 01:16:10.825282 | orchestrator | Thursday 02 April 2026 01:16:04 +0000 (0:00:00.484) 0:00:18.351 ******** 2026-04-02 01:16:10.825286 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:10.825290 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:10.825293 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:10.825297 | orchestrator | 2026-04-02 01:16:10.825301 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-02 
01:16:10.825305 | orchestrator | Thursday 02 April 2026 01:16:04 +0000 (0:00:00.294) 0:00:18.646 ******** 2026-04-02 01:16:10.825309 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:10.825313 | orchestrator | skipping: [testbed-node-4] 2026-04-02 01:16:10.825317 | orchestrator | skipping: [testbed-node-5] 2026-04-02 01:16:10.825321 | orchestrator | 2026-04-02 01:16:10.825325 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-02 01:16:10.825328 | orchestrator | Thursday 02 April 2026 01:16:05 +0000 (0:00:00.481) 0:00:19.127 ******** 2026-04-02 01:16:10.825332 | orchestrator | ok: [testbed-node-3] 2026-04-02 01:16:10.825336 | orchestrator | ok: [testbed-node-4] 2026-04-02 01:16:10.825339 | orchestrator | ok: [testbed-node-5] 2026-04-02 01:16:10.825343 | orchestrator | 2026-04-02 01:16:10.825347 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-02 01:16:10.825351 | orchestrator | Thursday 02 April 2026 01:16:05 +0000 (0:00:00.305) 0:00:19.432 ******** 2026-04-02 01:16:10.825354 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:16:10.825374 | orchestrator | 2026-04-02 01:16:10.825378 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-02 01:16:10.825382 | orchestrator | Thursday 02 April 2026 01:16:06 +0000 (0:00:00.245) 0:00:19.678 ******** 2026-04-02 01:16:10.825386 | orchestrator | skipping: [testbed-node-3] 2026-04-02 01:16:10.825426 | orchestrator | 2026-04-02 01:16:10.825433 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-02 01:16:10.825438 | orchestrator | Thursday 02 April 2026 01:16:06 +0000 (0:00:00.235) 0:00:19.913 ******** 2026-04-02 01:16:10.825447 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:16:10.825455 | orchestrator | 2026-04-02 01:16:10.825462 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-02 01:16:10.825468 | orchestrator | Thursday 02 April 2026 01:16:07 +0000 (0:00:01.694) 0:00:21.608 ******** 2026-04-02 01:16:10.825474 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:16:10.825480 | orchestrator | 2026-04-02 01:16:10.825487 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-02 01:16:10.825493 | orchestrator | Thursday 02 April 2026 01:16:08 +0000 (0:00:00.275) 0:00:21.883 ******** 2026-04-02 01:16:10.825499 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:16:10.825505 | orchestrator | 2026-04-02 01:16:10.825511 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:16:10.825517 | orchestrator | Thursday 02 April 2026 01:16:08 +0000 (0:00:00.244) 0:00:22.127 ******** 2026-04-02 01:16:10.825523 | orchestrator | 2026-04-02 01:16:10.825529 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:16:10.825535 | orchestrator | Thursday 02 April 2026 01:16:08 +0000 (0:00:00.066) 0:00:22.194 ******** 2026-04-02 01:16:10.825541 | orchestrator | 2026-04-02 01:16:10.825547 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-02 01:16:10.825554 | orchestrator | Thursday 02 April 2026 01:16:08 +0000 (0:00:00.242) 0:00:22.436 ******** 2026-04-02 01:16:10.825560 | orchestrator | 2026-04-02 01:16:10.825567 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-02 01:16:10.825572 | orchestrator | Thursday 02 April 2026 01:16:08 +0000 (0:00:00.085) 0:00:22.522 ******** 2026-04-02 01:16:10.825576 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-02 01:16:10.825580 | orchestrator | 
2026-04-02 01:16:10.825584 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-02 01:16:10.825588 | orchestrator | Thursday 02 April 2026 01:16:10 +0000 (0:00:01.233) 0:00:23.755 ******** 2026-04-02 01:16:10.825591 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-02 01:16:10.825595 | orchestrator |  "msg": [ 2026-04-02 01:16:10.825601 | orchestrator |  "Validator run completed.", 2026-04-02 01:16:10.825605 | orchestrator |  "You can find the report file here:", 2026-04-02 01:16:10.825609 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-02T01:15:47+00:00-report.json", 2026-04-02 01:16:10.825613 | orchestrator |  "on the following host:", 2026-04-02 01:16:10.825617 | orchestrator |  "testbed-manager" 2026-04-02 01:16:10.825631 | orchestrator |  ] 2026-04-02 01:16:10.825641 | orchestrator | } 2026-04-02 01:16:10.825645 | orchestrator | 2026-04-02 01:16:10.825649 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-02 01:16:10.825655 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-02 01:16:10.825663 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-02 01:16:10.825687 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-02 01:16:10.825703 | orchestrator | 2026-04-02 01:16:10.825709 | orchestrator | 2026-04-02 01:16:10.825715 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-02 01:16:10.825786 | orchestrator | Thursday 02 April 2026 01:16:10 +0000 (0:00:00.442) 0:00:24.198 ******** 2026-04-02 01:16:10.825807 | orchestrator | =============================================================================== 2026-04-02 01:16:10.825828 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.04s 2026-04-02 01:16:10.825845 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.76s 2026-04-02 01:16:10.825858 | orchestrator | Aggregate test results step one ----------------------------------------- 1.69s 2026-04-02 01:16:10.825873 | orchestrator | Write report file ------------------------------------------------------- 1.23s 2026-04-02 01:16:10.825888 | orchestrator | Get timestamp for report file ------------------------------------------- 0.97s 2026-04-02 01:16:10.825903 | orchestrator | Create report output directory ------------------------------------------ 0.70s 2026-04-02 01:16:10.825917 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.65s 2026-04-02 01:16:10.825930 | orchestrator | Print report file information ------------------------------------------- 0.64s 2026-04-02 01:16:10.825945 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.48s 2026-04-02 01:16:10.825958 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2026-04-02 01:16:10.825973 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.48s 2026-04-02 01:16:10.825987 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s 2026-04-02 01:16:10.826002 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2026-04-02 01:16:10.826095 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.46s 2026-04-02 01:16:10.826110 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.45s 2026-04-02 01:16:10.826125 | orchestrator | Prepare test data ------------------------------------------------------- 0.44s 2026-04-02 01:16:10.826141 | orchestrator | Print report file information 
------------------------------------------- 0.44s 2026-04-02 01:16:10.826155 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.44s 2026-04-02 01:16:10.826170 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.42s 2026-04-02 01:16:10.826185 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-04-02 01:16:11.005950 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-02 01:16:11.011809 | orchestrator | + set -e 2026-04-02 01:16:11.011888 | orchestrator | + source /opt/manager-vars.sh 2026-04-02 01:16:11.011898 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-02 01:16:11.011905 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-02 01:16:11.011912 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-02 01:16:11.011918 | orchestrator | ++ CEPH_VERSION=reef 2026-04-02 01:16:11.011926 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-02 01:16:11.011934 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-02 01:16:11.011960 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-02 01:16:11.011968 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-02 01:16:11.011975 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-02 01:16:11.011982 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-02 01:16:11.011989 | orchestrator | ++ export ARA=false 2026-04-02 01:16:11.011997 | orchestrator | ++ ARA=false 2026-04-02 01:16:11.012003 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-02 01:16:11.012010 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-02 01:16:11.012017 | orchestrator | ++ export TEMPEST=true 2026-04-02 01:16:11.012024 | orchestrator | ++ TEMPEST=true 2026-04-02 01:16:11.012031 | orchestrator | ++ export IS_ZUUL=true 2026-04-02 01:16:11.012038 | orchestrator | ++ IS_ZUUL=true 2026-04-02 01:16:11.012045 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 
2026-04-02 01:16:11.012052 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251 2026-04-02 01:16:11.012059 | orchestrator | ++ export EXTERNAL_API=false 2026-04-02 01:16:11.012066 | orchestrator | ++ EXTERNAL_API=false 2026-04-02 01:16:11.012072 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-02 01:16:11.012078 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-02 01:16:11.012111 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-02 01:16:11.012118 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-02 01:16:11.012124 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-02 01:16:11.012131 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-02 01:16:11.012137 | orchestrator | + source /etc/os-release 2026-04-02 01:16:11.012144 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-02 01:16:11.012150 | orchestrator | ++ NAME=Ubuntu 2026-04-02 01:16:11.012156 | orchestrator | ++ VERSION_ID=24.04 2026-04-02 01:16:11.012164 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-02 01:16:11.012171 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-02 01:16:11.012178 | orchestrator | ++ ID=ubuntu 2026-04-02 01:16:11.012184 | orchestrator | ++ ID_LIKE=debian 2026-04-02 01:16:11.012190 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-02 01:16:11.012196 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-02 01:16:11.012202 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-02 01:16:11.012209 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-02 01:16:11.012216 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-02 01:16:11.012222 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-02 01:16:11.012228 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-02 01:16:11.012268 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-02 01:16:11.012277 | orchestrator | + dpkg 
-s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-02 01:16:11.045137 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-02 01:16:34.956564 | orchestrator | 2026-04-02 01:16:34.956653 | orchestrator | # Status of Elasticsearch 2026-04-02 01:16:34.956666 | orchestrator | 2026-04-02 01:16:34.956674 | orchestrator | + pushd /opt/configuration/contrib 2026-04-02 01:16:34.956682 | orchestrator | + echo 2026-04-02 01:16:34.956689 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-02 01:16:34.956695 | orchestrator | + echo 2026-04-02 01:16:34.956701 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-02 01:16:35.136075 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-02 01:16:35.136180 | orchestrator | 2026-04-02 01:16:35.136188 | orchestrator | # Status of MariaDB 2026-04-02 01:16:35.136194 | orchestrator | 2026-04-02 01:16:35.136200 | orchestrator | + echo 2026-04-02 01:16:35.136207 | orchestrator | + echo '# Status of MariaDB' 2026-04-02 01:16:35.136213 | orchestrator | + echo 2026-04-02 01:16:35.136258 | orchestrator | ++ semver latest 10.0.0-0 2026-04-02 01:16:35.171713 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-02 01:16:35.171785 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-02 01:16:35.171791 | orchestrator | + osism status database 2026-04-02 01:16:36.749403 | orchestrator | 2026-04-02 01:16:36 | ERROR  | Unable to get ansible vault password 2026-04-02 01:16:36.749472 | orchestrator | 2026-04-02 01:16:36 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:16:36.749483 | orchestrator | 2026-04-02 01:16:36 | ERROR  | Dropping encrypted entries 2026-04-02 01:16:36.782528 | orchestrator | 2026-04-02 01:16:36 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-04-02 01:16:36.793718 | orchestrator | 2026-04-02 01:16:36 | INFO  | Cluster Status: Primary 2026-04-02 01:16:36.793860 | orchestrator | 2026-04-02 01:16:36 | INFO  | Connected: ON 2026-04-02 01:16:36.793877 | orchestrator | 2026-04-02 01:16:36 | INFO  | Ready: ON 2026-04-02 01:16:36.793883 | orchestrator | 2026-04-02 01:16:36 | INFO  | Cluster Size: 3 2026-04-02 01:16:36.794418 | orchestrator | 2026-04-02 01:16:36 | INFO  | Local State: Synced 2026-04-02 01:16:36.794536 | orchestrator | 2026-04-02 01:16:36 | INFO  | Cluster State UUID: 5dc34c94-2e2e-11f1-ae7e-3f84a439ffc6 2026-04-02 01:16:36.794594 | orchestrator | 2026-04-02 01:16:36 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-04-02 01:16:36.794623 | orchestrator | 2026-04-02 01:16:36 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-02 01:16:36.794629 | orchestrator | 2026-04-02 01:16:36 | INFO  | Local Node UUID: 91ebaa1f-2e2e-11f1-a63a-539250d062b3 2026-04-02 01:16:36.794635 | orchestrator | 2026-04-02 01:16:36 | INFO  | Flow Control Paused: 0.00% 2026-04-02 01:16:36.794640 | orchestrator | 2026-04-02 01:16:36 | INFO  | Recv Queue Avg: 0 2026-04-02 01:16:36.794646 | orchestrator | 2026-04-02 01:16:36 | INFO  | Send Queue Avg: 0.000755858 2026-04-02 01:16:36.794658 | orchestrator | 2026-04-02 01:16:36 | INFO  | Transactions: 4373 local commits, 6558 replicated, 72 received 2026-04-02 01:16:36.794663 | orchestrator | 2026-04-02 01:16:36 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-02 01:16:36.794669 | orchestrator | 2026-04-02 01:16:36 | INFO  | MariaDB Uptime: 21 minutes, 39 seconds 2026-04-02 01:16:36.794674 | orchestrator | 2026-04-02 01:16:36 
| INFO  | Threads: 128 connected, 1 running 2026-04-02 01:16:36.794679 | orchestrator | 2026-04-02 01:16:36 | INFO  | Queries: 212345 total, 0 slow 2026-04-02 01:16:36.794684 | orchestrator | 2026-04-02 01:16:36 | INFO  | Aborted Connects: 148 2026-04-02 01:16:36.794689 | orchestrator | 2026-04-02 01:16:36 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-02 01:16:37.010479 | orchestrator | 2026-04-02 01:16:37.010572 | orchestrator | # Status of Prometheus 2026-04-02 01:16:37.010586 | orchestrator | 2026-04-02 01:16:37.010592 | orchestrator | + echo 2026-04-02 01:16:37.010598 | orchestrator | + echo '# Status of Prometheus' 2026-04-02 01:16:37.010605 | orchestrator | + echo 2026-04-02 01:16:37.010613 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-02 01:16:37.062860 | orchestrator | Unauthorized 2026-04-02 01:16:37.065907 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-02 01:16:37.123147 | orchestrator | Unauthorized 2026-04-02 01:16:37.126216 | orchestrator | 2026-04-02 01:16:37.126284 | orchestrator | # Status of RabbitMQ 2026-04-02 01:16:37.126291 | orchestrator | 2026-04-02 01:16:37.126295 | orchestrator | + echo 2026-04-02 01:16:37.126299 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-02 01:16:37.126304 | orchestrator | + echo 2026-04-02 01:16:37.127572 | orchestrator | ++ semver latest 10.0.0-0 2026-04-02 01:16:37.187801 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-02 01:16:37.187871 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-02 01:16:37.187881 | orchestrator | + osism status messaging 2026-04-02 01:16:44.162486 | orchestrator | 2026-04-02 01:16:44 | ERROR  | Unable to get ansible vault password 2026-04-02 01:16:44.162561 | orchestrator | 2026-04-02 01:16:44 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:16:44.162984 | orchestrator | 2026-04-02 01:16:44 | ERROR  | Dropping encrypted 
entries 2026-04-02 01:16:44.196634 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-04-02 01:16:44.262927 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-04-02 01:16:44.263035 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-04-02 01:16:44.263050 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-02 01:16:44.263169 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-02 01:16:44.263443 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-02 01:16:44.263717 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-02 01:16:44.263975 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-02 01:16:44.264303 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Connections: 203, Channels: 202, Queues: 173 2026-04-02 01:16:44.264312 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Messages: 232 total, 231 ready, 1 unacked 2026-04-02 01:16:44.264603 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Message Rates: 6.2/s publish, 6.6/s deliver 2026-04-02 01:16:44.264747 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Disk Free: 58.1 GB (limit: 0.0 GB) 2026-04-02 01:16:44.265022 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-02 01:16:44.265218 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] File Descriptors: 111/1024 2026-04-02 01:16:44.265529 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-0] Sockets: 65/832 2026-04-02 
01:16:44.265681 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-04-02 01:16:44.321862 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-04-02 01:16:44.321941 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-04-02 01:16:44.321950 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-02 01:16:44.321967 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-02 01:16:44.322201 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-02 01:16:44.322213 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-02 01:16:44.322316 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-02 01:16:44.322515 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Connections: 203, Channels: 202, Queues: 173 2026-04-02 01:16:44.322822 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Messages: 232 total, 231 ready, 1 unacked 2026-04-02 01:16:44.322833 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Message Rates: 6.2/s publish, 6.6/s deliver 2026-04-02 01:16:44.323036 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-02 01:16:44.324146 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-02 01:16:44.324182 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] File Descriptors: 113/1024 2026-04-02 01:16:44.324187 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-1] Sockets: 64/832 2026-04-02 01:16:44.324191 | 
orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-04-02 01:16:44.380450 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-04-02 01:16:44.380517 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-04-02 01:16:44.380577 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-02 01:16:44.380805 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-02 01:16:44.381127 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-02 01:16:44.381503 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-02 01:16:44.381512 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-02 01:16:44.381900 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Connections: 203, Channels: 202, Queues: 173 2026-04-02 01:16:44.382113 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Messages: 232 total, 231 ready, 1 unacked 2026-04-02 01:16:44.382379 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Message Rates: 6.2/s publish, 6.6/s deliver 2026-04-02 01:16:44.382671 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-02 01:16:44.382913 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-02 01:16:44.383197 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] File Descriptors: 120/1024 2026-04-02 01:16:44.383412 | orchestrator | 2026-04-02 01:16:44 | INFO  | [testbed-node-2] Sockets: 74/832 2026-04-02 01:16:44.383696 | orchestrator | 
2026-04-02 01:16:44 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-02 01:16:44.633401 | orchestrator | 2026-04-02 01:16:44.633479 | orchestrator | # Status of Redis 2026-04-02 01:16:44.633488 | orchestrator | 2026-04-02 01:16:44.633495 | orchestrator | + echo 2026-04-02 01:16:44.633502 | orchestrator | + echo '# Status of Redis' 2026-04-02 01:16:44.633511 | orchestrator | + echo 2026-04-02 01:16:44.633518 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-02 01:16:44.636794 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001453s;;;0.000000;10.000000 2026-04-02 01:16:44.637047 | orchestrator | + popd 2026-04-02 01:16:44.637082 | orchestrator | + echo 2026-04-02 01:16:44.637087 | orchestrator | 2026-04-02 01:16:44.637120 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-02 01:16:44.637128 | orchestrator | # Create backup of MariaDB database 2026-04-02 01:16:44.637132 | orchestrator | + echo 2026-04-02 01:16:44.637136 | orchestrator | 2026-04-02 01:16:44.637140 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-02 01:16:45.913468 | orchestrator | 2026-04-02 01:16:45 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-02 01:16:45.980572 | orchestrator | 2026-04-02 01:16:45 | INFO  | Task e6c03a14-337b-4218-b052-5a5e5828ea6e (mariadb_backup) was prepared for execution. 2026-04-02 01:16:45.980641 | orchestrator | 2026-04-02 01:16:45 | INFO  | It takes a moment until task e6c03a14-337b-4218-b052-5a5e5828ea6e (mariadb_backup) has been started and output is visible here. 
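The `osism status database` step above reports Galera health (Cluster Size, Local State: Synced, Ready: ON). Those values ultimately come from MariaDB's `wsrep_%` status variables; a minimal, hedged sketch of evaluating such output locally (the sample values below are illustrative stand-ins, not captured from this run — a real check would query the database, e.g. `mysql -Nse "SHOW GLOBAL STATUS LIKE 'wsrep_%'"`):

```shell
# Hypothetical sample of `SHOW GLOBAL STATUS LIKE 'wsrep_%'` output,
# inlined here so the sketch is self-contained.
wsrep_status() {
cat <<'EOF'
wsrep_cluster_size 3
wsrep_local_state_comment Synced
wsrep_ready ON
wsrep_connected ON
EOF
}

check_galera() {
    # Pull the three fields a basic Galera health gate cares about.
    size=$(wsrep_status | awk '$1 == "wsrep_cluster_size" {print $2}')
    state=$(wsrep_status | awk '$1 == "wsrep_local_state_comment" {print $2}')
    ready=$(wsrep_status | awk '$1 == "wsrep_ready" {print $2}')
    if [ "$size" -ge 3 ] && [ "$state" = "Synced" ] && [ "$ready" = "ON" ]; then
        echo "Galera validation PASSED (size=$size, state=$state)"
    else
        echo "Galera validation FAILED (size=$size, state=$state)" >&2
        return 1
    fi
}

check_galera
```

The expected cluster size (3) matches the testbed's three control nodes; in other environments that threshold would differ.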
2026-04-02 01:17:56.405098 | orchestrator | 2026-04-02 01:17:56.405175 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-02 01:17:56.405183 | orchestrator | 2026-04-02 01:17:56.405187 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-02 01:17:56.405192 | orchestrator | Thursday 02 April 2026 01:16:49 +0000 (0:00:00.226) 0:00:00.226 ******** 2026-04-02 01:17:56.405196 | orchestrator | ok: [testbed-node-0] 2026-04-02 01:17:56.405200 | orchestrator | ok: [testbed-node-1] 2026-04-02 01:17:56.405204 | orchestrator | ok: [testbed-node-2] 2026-04-02 01:17:56.405208 | orchestrator | 2026-04-02 01:17:56.405212 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-02 01:17:56.405215 | orchestrator | Thursday 02 April 2026 01:16:49 +0000 (0:00:00.311) 0:00:00.537 ******** 2026-04-02 01:17:56.405220 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-02 01:17:56.405224 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-02 01:17:56.405262 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-02 01:17:56.405284 | orchestrator | 2026-04-02 01:17:56.405288 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-02 01:17:56.405292 | orchestrator | 2026-04-02 01:17:56.405295 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-02 01:17:56.405299 | orchestrator | Thursday 02 April 2026 01:16:49 +0000 (0:00:00.414) 0:00:00.952 ******** 2026-04-02 01:17:56.405303 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-02 01:17:56.405307 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-02 01:17:56.405311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-02 01:17:56.405315 | orchestrator | 
2026-04-02 01:17:56.405318 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-02 01:17:56.405323 | orchestrator | Thursday 02 April 2026 01:16:50 +0000 (0:00:00.388) 0:00:01.340 ********
2026-04-02 01:17:56.405327 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-02 01:17:56.405332 | orchestrator |
2026-04-02 01:17:56.405336 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-02 01:17:56.405340 | orchestrator | Thursday 02 April 2026 01:16:50 +0000 (0:00:00.685) 0:00:02.026 ********
2026-04-02 01:17:56.405344 | orchestrator | ok: [testbed-node-1]
2026-04-02 01:17:56.405347 | orchestrator | ok: [testbed-node-0]
2026-04-02 01:17:56.405351 | orchestrator | ok: [testbed-node-2]
2026-04-02 01:17:56.405355 | orchestrator |
2026-04-02 01:17:56.405359 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-02 01:17:56.405362 | orchestrator | Thursday 02 April 2026 01:16:54 +0000 (0:00:03.203) 0:00:05.229 ********
2026-04-02 01:17:56.405366 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:17:56.405371 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:17:56.405375 | orchestrator | changed: [testbed-node-0]
2026-04-02 01:17:56.405379 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-02 01:17:56.405383 | orchestrator |
2026-04-02 01:17:56.405396 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-02 01:17:56.405400 | orchestrator | skipping: no hosts matched
2026-04-02 01:17:56.405410 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-02 01:17:56.405414 | orchestrator |
2026-04-02 01:17:56.405418 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-02 01:17:56.405421 | orchestrator | skipping: no hosts matched
2026-04-02 01:17:56.405425 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-02 01:17:56.405429 | orchestrator | mariadb_bootstrap_restart
2026-04-02 01:17:56.405433 | orchestrator |
2026-04-02 01:17:56.405437 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-02 01:17:56.405441 | orchestrator | skipping: no hosts matched
2026-04-02 01:17:56.405444 | orchestrator |
2026-04-02 01:17:56.405448 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-02 01:17:56.405452 | orchestrator |
2026-04-02 01:17:56.405456 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-02 01:17:56.405459 | orchestrator | Thursday 02 April 2026 01:17:55 +0000 (0:01:01.567) 0:01:06.797 ********
2026-04-02 01:17:56.405475 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:17:56.405479 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:17:56.405483 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:17:56.405487 | orchestrator |
2026-04-02 01:17:56.405490 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-02 01:17:56.405494 | orchestrator | Thursday 02 April 2026 01:17:55 +0000 (0:00:00.296) 0:01:07.093 ********
2026-04-02 01:17:56.405498 | orchestrator | skipping: [testbed-node-0]
2026-04-02 01:17:56.405502 | orchestrator | skipping: [testbed-node-1]
2026-04-02 01:17:56.405505 | orchestrator | skipping: [testbed-node-2]
2026-04-02 01:17:56.405509 | orchestrator |
2026-04-02 01:17:56.405513 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:17:56.405521 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-02 01:17:56.405526 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 01:17:56.405530 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 01:17:56.405534 | orchestrator |
2026-04-02 01:17:56.405538 | orchestrator |
2026-04-02 01:17:56.405541 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:17:56.405545 | orchestrator | Thursday 02 April 2026 01:17:56 +0000 (0:00:00.205) 0:01:07.299 ********
2026-04-02 01:17:56.405549 | orchestrator | ===============================================================================
2026-04-02 01:17:56.405553 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 61.57s
2026-04-02 01:17:56.405567 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.20s
2026-04-02 01:17:56.405571 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.69s
2026-04-02 01:17:56.405575 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-04-02 01:17:56.405578 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s
2026-04-02 01:17:56.405582 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-04-02 01:17:56.405586 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s
2026-04-02 01:17:56.405590 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s
2026-04-02 01:17:56.580981 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-02 01:17:56.586508 | orchestrator | + set -e
2026-04-02 01:17:56.586585 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-02 01:17:56.586593 | orchestrator | ++ export INTERACTIVE=false
2026-04-02 01:17:56.586598 | orchestrator | ++ INTERACTIVE=false
2026-04-02 01:17:56.586602 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-02 01:17:56.586607 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-02 01:17:56.586611 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-02 01:17:56.587511 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-02 01:17:56.594385 | orchestrator |
2026-04-02 01:17:56.594458 | orchestrator | # OpenStack endpoints
2026-04-02 01:17:56.594467 | orchestrator |
2026-04-02 01:17:56.594474 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-02 01:17:56.594481 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-02 01:17:56.594488 | orchestrator | + export OS_CLOUD=admin
2026-04-02 01:17:56.594494 | orchestrator | + OS_CLOUD=admin
2026-04-02 01:17:56.594502 | orchestrator | + echo
2026-04-02 01:17:56.594506 | orchestrator | + echo '# OpenStack endpoints'
2026-04-02 01:17:56.594510 | orchestrator | + echo
2026-04-02 01:17:56.594514 | orchestrator | + openstack endpoint list
2026-04-02 01:17:59.887024 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-02 01:17:59.887100 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-02 01:17:59.887106 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-02 01:17:59.887111 | orchestrator | | 0838f13c897c49dfb8e5d3ee3b9f3b69 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-02 01:17:59.887115 | orchestrator | | 0918723dcb4a419c9097517c2a6a5a73 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-02 01:17:59.887133 | orchestrator | | 1f79dcde2d224b3f90dc01bc42f7e001 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-02 01:17:59.887153 | orchestrator | | 3645fa4901bb4c0d903ca16a9356b2f0 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-02 01:17:59.887157 | orchestrator | | 6a0fd8f68942442783ba3bfdb112b120 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-02 01:17:59.887161 | orchestrator | | 70fc888e8bc2495984cb6efbba81737d | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-02 01:17:59.887165 | orchestrator | | 7e80480464614d95b925f5fac28cc974 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-02 01:17:59.887168 | orchestrator | | 7fc34295d0d440528d3eebf4656f0973 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-02 01:17:59.887172 | orchestrator | | 86856822923d4958b2ba6827a405c148 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-02 01:17:59.887176 | orchestrator | | 8c89bae290964782b7cb00abf0016873 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-02 01:17:59.887179 | orchestrator | | 8ce8d2c8f2054d10a2dff0d1d35d9121 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-02 01:17:59.887183 | orchestrator | | 9844ccba48b14cca857b40bc856591a2 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-02 01:17:59.887187 | orchestrator | | a6c291cf360547ba802535fec61202de | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-02 01:17:59.887191 | orchestrator | | c4fe4c748aad41c2907ef47d20f661b2 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-02 01:17:59.887194 | orchestrator | | d64cc35075fb42ecbd29d72de0a16a57 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-02 01:17:59.887198 | orchestrator | | e2c1061a4c9b4692be03ba297073b34a | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-02 01:17:59.887204 | orchestrator | | e6d8eb1ea245410e9b5f60ff1d73112f | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-02 01:17:59.887210 | orchestrator | | e9149eede8774e3bb5d9e53ccd7cf8c3 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-02 01:17:59.887215 | orchestrator | | ebdb592e529748d0a84dec563141b1cc | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-02 01:17:59.887221 | orchestrator | | f78d87e6fd1f49fd9c1d94e9f928cb64 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-02 01:17:59.887333 | orchestrator | | fa46072670cb4da1be7946207a060353 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-02 01:17:59.887341 | orchestrator | | feb76718c04a429a8118a1265b2360a5 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-02 01:17:59.887347 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-02 01:18:00.129567 | orchestrator |
2026-04-02 01:18:00.129657 | orchestrator | # Cinder
2026-04-02 01:18:00.129667 | orchestrator |
2026-04-02 01:18:00.129675 | orchestrator | + echo
2026-04-02 01:18:00.129683 | orchestrator | + echo '# Cinder'
2026-04-02 01:18:00.129691 | orchestrator | + echo
2026-04-02 01:18:00.129698 | orchestrator | + openstack volume service list
2026-04-02 01:18:02.634462 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-02 01:18:02.634515 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-02 01:18:02.634521 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-02 01:18:02.634535 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-02T01:17:56.000000 |
2026-04-02 01:18:02.634539 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-02T01:17:56.000000 |
2026-04-02 01:18:02.634543 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-02T01:17:56.000000 |
2026-04-02 01:18:02.634547 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-02T01:17:56.000000 |
2026-04-02 01:18:02.634551 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-02T01:18:00.000000 |
2026-04-02 01:18:02.634555 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-02T01:18:00.000000 |
2026-04-02 01:18:02.634559 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-02T01:18:02.000000 |
2026-04-02 01:18:02.634562 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-02T01:17:54.000000 |
2026-04-02 01:18:02.634566 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-02T01:17:54.000000 |
2026-04-02 01:18:02.634570 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-02 01:18:02.863761 | orchestrator |
2026-04-02 01:18:02.863811 | orchestrator | # Neutron
2026-04-02 01:18:02.863816 | orchestrator |
2026-04-02 01:18:02.863821 | orchestrator | + echo
2026-04-02 01:18:02.863825 | orchestrator | + echo '# Neutron'
2026-04-02 01:18:02.863830 | orchestrator | + echo
2026-04-02 01:18:02.863834 | orchestrator | + openstack network agent list
2026-04-02 01:18:05.652540 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-02 01:18:05.652653 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-02 01:18:05.652665 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-02 01:18:05.652671 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-02 01:18:05.652676 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-02 01:18:05.652680 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-02 01:18:05.652684 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-02 01:18:05.652688 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-02 01:18:05.652692 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-02 01:18:05.652696 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-02 01:18:05.652726 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-02 01:18:05.652734 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-02 01:18:05.652743 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-02 01:18:05.895117 | orchestrator | + openstack network service provider list
2026-04-02 01:18:08.423388 | orchestrator | +---------------+------+---------+
2026-04-02 01:18:08.423502 | orchestrator | | Service Type | Name | Default |
2026-04-02 01:18:08.423521 | orchestrator | +---------------+------+---------+
2026-04-02 01:18:08.423533 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-02 01:18:08.423545 | orchestrator | +---------------+------+---------+
2026-04-02 01:18:08.686321 | orchestrator |
2026-04-02 01:18:08.686409 | orchestrator | # Nova
2026-04-02 01:18:08.686418 | orchestrator |
2026-04-02 01:18:08.686425 | orchestrator | + echo
2026-04-02 01:18:08.686433 | orchestrator | + echo '# Nova'
2026-04-02 01:18:08.686439 | orchestrator | + echo
2026-04-02 01:18:08.686446 | orchestrator | + openstack compute service list
2026-04-02 01:18:11.471336 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-02 01:18:11.471438 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-02 01:18:11.471450 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-02 01:18:11.471458 | orchestrator | | 711f35fb-13d1-4cc6-ac63-9079d838917b | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-02T01:18:07.000000 |
2026-04-02 01:18:11.471465 | orchestrator | | 47e18d97-8cb9-4d6c-9213-5ce1a5d815cb | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-02T01:18:07.000000 |
2026-04-02 01:18:11.471492 | orchestrator | | 2b8fa3d3-506c-4d4b-b604-c63649a3e8b9 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-02T01:18:08.000000 |
2026-04-02 01:18:11.471500 | orchestrator | | ae32cd8e-cd9c-4a0e-b042-f29c1b4c93b2 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-02T01:18:09.000000 |
2026-04-02 01:18:11.471507 | orchestrator | | efff1b85-2b7c-43d6-86cc-fc85f8f54327 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-02T01:18:09.000000 |
2026-04-02 01:18:11.471513 | orchestrator | | 9e02db27-26b4-4a8f-845d-e34d56ff83c0 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-02T01:18:02.000000 |
2026-04-02 01:18:11.471519 | orchestrator | | 3915ba51-1326-42b2-9c9b-aa23487596cd | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-02T01:18:03.000000 |
2026-04-02 01:18:11.471527 | orchestrator | | 2818355b-3206-4718-9aed-b0b716539cb2 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-02T01:18:04.000000 |
2026-04-02 01:18:11.471534 | orchestrator | | ed6ea30b-e85e-41e2-9e1d-b9033309ee6f | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-02T01:18:04.000000 |
2026-04-02 01:18:11.471541 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-02 01:18:11.729284 | orchestrator | + openstack hypervisor list
2026-04-02 01:18:14.284907 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-02 01:18:14.284985 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-02 01:18:14.284995 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-02 01:18:14.285003 | orchestrator | | 834b9663-1b32-4610-98fe-e1c1ba515608 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-02 01:18:14.285010 | orchestrator | | e399f082-19ef-4b37-91b9-6df15e61d109 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-02 01:18:14.285043 | orchestrator | | 1e7e278b-a7ca-4bf1-91dc-e64f1053f415 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-02 01:18:14.285051 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-02 01:18:14.502619 | orchestrator |
2026-04-02 01:18:14.502705 | orchestrator | + echo
2026-04-02 01:18:14.502715 | orchestrator | + echo '# Run OpenStack test play'
2026-04-02 01:18:14.503558 | orchestrator | # Run OpenStack test play
2026-04-02 01:18:14.503603 | orchestrator |
2026-04-02 01:18:14.503611 | orchestrator | + echo
2026-04-02 01:18:14.503618 | orchestrator | + osism apply --environment openstack test
2026-04-02 01:18:15.651718 | orchestrator | 2026-04-02 01:18:15 | INFO  | Trying to run play test in environment openstack
2026-04-02 01:18:15.675828 | orchestrator | 2026-04-02 01:18:15 | INFO  | Prepare task for execution of test.
2026-04-02 01:18:15.734657 | orchestrator | 2026-04-02 01:18:15 | INFO  | Task 59a643a3-ca28-47a4-8b3c-5b28493b5872 (test) was prepared for execution.
2026-04-02 01:18:15.734726 | orchestrator | 2026-04-02 01:18:15 | INFO  | It takes a moment until task 59a643a3-ca28-47a4-8b3c-5b28493b5872 (test) has been started and output is visible here.
2026-04-02 01:21:26.490871 | orchestrator | 2026-04-02 01:21:26.490955 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-02 01:21:26.490963 | orchestrator | 2026-04-02 01:21:26.490967 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-02 01:21:26.490972 | orchestrator | Thursday 02 April 2026 01:18:18 +0000 (0:00:00.102) 0:00:00.102 ******** 2026-04-02 01:21:26.490976 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.490980 | orchestrator | 2026-04-02 01:21:26.490984 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-02 01:21:26.490988 | orchestrator | Thursday 02 April 2026 01:18:22 +0000 (0:00:03.576) 0:00:03.679 ******** 2026-04-02 01:21:26.490992 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.490996 | orchestrator | 2026-04-02 01:21:26.491000 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-02 01:21:26.491004 | orchestrator | Thursday 02 April 2026 01:18:26 +0000 (0:00:04.372) 0:00:08.052 ******** 2026-04-02 01:21:26.491007 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491011 | orchestrator | 2026-04-02 01:21:26.491015 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-02 01:21:26.491019 | orchestrator | Thursday 02 April 2026 01:18:33 +0000 (0:00:06.608) 0:00:14.661 ******** 2026-04-02 01:21:26.491023 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491032 | orchestrator | 2026-04-02 01:21:26.491040 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-02 01:21:26.491044 | orchestrator | Thursday 02 April 2026 01:18:37 +0000 (0:00:04.211) 0:00:18.872 ******** 2026-04-02 01:21:26.491048 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491052 | orchestrator | 2026-04-02 01:21:26.491056 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-02 01:21:26.491060 | orchestrator | Thursday 02 April 2026 01:18:41 +0000 (0:00:04.113) 0:00:22.986 ******** 2026-04-02 01:21:26.491063 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-02 01:21:26.491067 | orchestrator | changed: [localhost] => (item=member) 2026-04-02 01:21:26.491072 | orchestrator | changed: [localhost] => (item=creator) 2026-04-02 01:21:26.491075 | orchestrator | 2026-04-02 01:21:26.491081 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-02 01:21:26.491088 | orchestrator | Thursday 02 April 2026 01:18:52 +0000 (0:00:11.223) 0:00:34.210 ******** 2026-04-02 01:21:26.491096 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491105 | orchestrator | 2026-04-02 01:21:26.491111 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-02 01:21:26.491118 | orchestrator | Thursday 02 April 2026 01:18:57 +0000 (0:00:04.370) 0:00:38.580 ******** 2026-04-02 01:21:26.491124 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491144 | orchestrator | 2026-04-02 01:21:26.491151 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-02 01:21:26.491157 | orchestrator | Thursday 02 April 2026 01:19:02 +0000 (0:00:04.700) 0:00:43.281 ******** 2026-04-02 01:21:26.491163 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491169 | orchestrator | 2026-04-02 01:21:26.491176 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-02 01:21:26.491182 | orchestrator | Thursday 02 April 2026 01:19:06 +0000 (0:00:04.202) 0:00:47.484 ******** 2026-04-02 01:21:26.491189 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491196 | orchestrator | 2026-04-02 01:21:26.491202 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-04-02 01:21:26.491208 | orchestrator | Thursday 02 April 2026 01:19:10 +0000 (0:00:03.944) 0:00:51.428 ******** 2026-04-02 01:21:26.491214 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491220 | orchestrator | 2026-04-02 01:21:26.491226 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-02 01:21:26.491233 | orchestrator | Thursday 02 April 2026 01:19:14 +0000 (0:00:04.047) 0:00:55.476 ******** 2026-04-02 01:21:26.491239 | orchestrator | changed: [localhost] 2026-04-02 01:21:26.491246 | orchestrator | 2026-04-02 01:21:26.491252 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-02 01:21:26.491259 | orchestrator | Thursday 02 April 2026 01:19:18 +0000 (0:00:04.203) 0:00:59.679 ******** 2026-04-02 01:21:26.491266 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-02 01:21:26.491273 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-02 01:21:26.491280 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-02 01:21:26.491286 | orchestrator | 2026-04-02 01:21:26.491292 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-02 01:21:26.491298 | orchestrator | Thursday 02 April 2026 01:19:32 +0000 (0:00:14.267) 0:01:13.946 ******** 2026-04-02 01:21:26.491304 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-02 01:21:26.491319 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-02 01:21:26.491331 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-02 01:21:26.491338 | orchestrator | 2026-04-02 01:21:26.491345 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-02 01:21:26.491350 | orchestrator | Thursday 02 April 2026 01:19:49 +0000 (0:00:17.036) 0:01:30.982 ******** 2026-04-02 01:21:26.491354 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-02 01:21:26.491358 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-02 01:21:26.491362 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-02 01:21:26.491365 | orchestrator | 2026-04-02 01:21:26.491371 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-02 01:21:26.491378 | orchestrator | 2026-04-02 01:21:26.491384 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-02 01:21:26.491404 | orchestrator | Thursday 02 April 2026 01:20:18 +0000 (0:00:29.171) 0:02:00.154 ******** 2026-04-02 01:21:26.491423 | orchestrator | ok: [localhost] 2026-04-02 01:21:26.491430 | orchestrator | 2026-04-02 01:21:26.491437 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-02 01:21:26.491442 | orchestrator | Thursday 02 April 2026 01:20:22 +0000 (0:00:03.662) 0:02:03.816 ******** 2026-04-02 01:21:26.491455 | orchestrator | skipping: [localhost] 2026-04-02 01:21:26.491459 | orchestrator | 2026-04-02 01:21:26.491463 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-02 01:21:26.491468 | orchestrator | Thursday 02 April 2026 01:20:22 +0000 (0:00:00.051) 0:02:03.868 ******** 2026-04-02 01:21:26.491477 | orchestrator | skipping: [localhost] 2026-04-02 01:21:26.491482 | orchestrator | 2026-04-02 01:21:26.491486 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-02 01:21:26.491490 | orchestrator | 
Thursday 02 April 2026 01:20:22 +0000 (0:00:00.054) 0:02:03.922 ********
2026-04-02 01:21:26.491495 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-02 01:21:26.491499 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-02 01:21:26.491504 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-02 01:21:26.491508 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-02 01:21:26.491512 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-02 01:21:26.491516 | orchestrator | skipping: [localhost]
2026-04-02 01:21:26.491521 | orchestrator |
2026-04-02 01:21:26.491525 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-02 01:21:26.491530 | orchestrator | Thursday 02 April 2026 01:20:22 +0000 (0:00:00.150) 0:02:04.073 ********
2026-04-02 01:21:26.491534 | orchestrator | skipping: [localhost]
2026-04-02 01:21:26.491539 | orchestrator |
2026-04-02 01:21:26.491543 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-02 01:21:26.491547 | orchestrator | Thursday 02 April 2026 01:20:22 +0000 (0:00:00.149) 0:02:04.223 ********
2026-04-02 01:21:26.491552 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-02 01:21:26.491556 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-02 01:21:26.491561 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-02 01:21:26.491567 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-02 01:21:26.491572 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-02 01:21:26.491576 | orchestrator |
2026-04-02 01:21:26.491580 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-02 01:21:26.491585 | orchestrator | Thursday 02 April 2026 01:20:27 +0000 (0:00:04.737) 0:02:08.960 ********
2026-04-02 01:21:26.491589 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-02 01:21:26.491594 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-02 01:21:26.491599 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-02 01:21:26.491603 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-02 01:21:26.491608 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-04-02 01:21:26.491613 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j816710228461.2738', 'results_file': '/ansible/.ansible_async/j816710228461.2738', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-02 01:21:26.491620 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j67975826772.2763', 'results_file': '/ansible/.ansible_async/j67975826772.2763', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-02 01:21:26.491624 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j54123206685.2795', 'results_file': '/ansible/.ansible_async/j54123206685.2795', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-02 01:21:26.491629 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j353907807979.2820', 'results_file': '/ansible/.ansible_async/j353907807979.2820', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-02 01:21:26.491636 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j435853968246.2845', 'results_file': '/ansible/.ansible_async/j435853968246.2845', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-02 01:21:26.491641 | orchestrator |
2026-04-02 01:21:26.491645 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-02 01:21:26.491650 | orchestrator | Thursday 02 April 2026 01:21:25 +0000 (0:00:57.789) 0:03:06.750 ********
2026-04-02 01:21:26.491658 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-02 01:22:39.835322 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-02 01:22:39.835421 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-02 01:22:39.835434 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-02 01:22:39.835441 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-02 01:22:39.835448 | orchestrator |
2026-04-02 01:22:39.835456 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-02 01:22:39.835463 | orchestrator | Thursday 02 April 2026 01:21:30 +0000 (0:00:04.630) 0:03:11.381 ********
2026-04-02 01:22:39.835470 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-02 01:22:39.835480 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j682576569816.2955', 'results_file': '/ansible/.ansible_async/j682576569816.2955', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835490 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j452108952210.2980', 'results_file': '/ansible/.ansible_async/j452108952210.2980', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835497 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j326132982890.3005', 'results_file': '/ansible/.ansible_async/j326132982890.3005', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835521 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j456143540298.3030', 'results_file': '/ansible/.ansible_async/j456143540298.3030', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835529 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j60967278578.3055', 'results_file': '/ansible/.ansible_async/j60967278578.3055', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835536 | orchestrator |
2026-04-02 01:22:39.835543 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-02 01:22:39.835550 | orchestrator | Thursday 02 April 2026 01:21:39 +0000 (0:00:09.516) 0:03:20.898 ********
2026-04-02 01:22:39.835556 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-02 01:22:39.835563 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-02 01:22:39.835569 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-02 01:22:39.835575 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-02 01:22:39.835582 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-02 01:22:39.835588 | orchestrator |
2026-04-02 01:22:39.835594 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-02 01:22:39.835624 | orchestrator | Thursday 02 April 2026 01:21:44 +0000 (0:00:04.376) 0:03:25.275 ********
2026-04-02 01:22:39.835631 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-02 01:22:39.835638 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j219583604348.3124', 'results_file': '/ansible/.ansible_async/j219583604348.3124', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835644 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j697141404869.3149', 'results_file': '/ansible/.ansible_async/j697141404869.3149', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835650 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j929074764834.3175', 'results_file': '/ansible/.ansible_async/j929074764834.3175', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835656 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j633384956660.3201', 'results_file': '/ansible/.ansible_async/j633384956660.3201', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835690 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j531929962893.3227', 'results_file': '/ansible/.ansible_async/j531929962893.3227', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-02 01:22:39.835697 | orchestrator |
2026-04-02 01:22:39.835703 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-02 01:22:39.835710 | orchestrator | Thursday 02 April 2026 01:21:53 +0000 (0:00:09.929) 0:03:35.204 ********
2026-04-02 01:22:39.835716 | orchestrator | changed: [localhost]
2026-04-02 01:22:39.835724 | orchestrator |
2026-04-02 01:22:39.835731 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-02 01:22:39.835735 | orchestrator | Thursday 02 April 2026 01:22:01 +0000 (0:00:07.437) 0:03:42.641 ********
2026-04-02 01:22:39.835739 | orchestrator | changed: [localhost]
2026-04-02 01:22:39.835743 | orchestrator |
2026-04-02 01:22:39.835746 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-02 01:22:39.835750 | orchestrator | Thursday 02 April 2026 01:22:15 +0000 (0:00:13.681) 0:03:56.323 ********
2026-04-02 01:22:39.835755 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-02 01:22:39.835759 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-02 01:22:39.835763 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-02 01:22:39.835767 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-02 01:22:39.835770 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-02 01:22:39.835774 | orchestrator |
2026-04-02 01:22:39.835778 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-02 01:22:39.835782 | orchestrator | Thursday 02 April 2026 01:22:39 +0000 (0:00:00.113) 0:04:20.797 ********
2026-04-02 01:22:39.835786 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-02 01:22:39.835790 | orchestrator |     "msg": "test: 192.168.112.156"
2026-04-02 01:22:39.835793 | orchestrator | }
2026-04-02 01:22:39.835797 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-02 01:22:39.835802 | orchestrator |     "msg": "test-1: 192.168.112.100"
2026-04-02 01:22:39.835805 | orchestrator | }
2026-04-02 01:22:39.835809 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-02 01:22:39.835813 | orchestrator |     "msg": "test-2: 192.168.112.103"
2026-04-02 01:22:39.835817 | orchestrator | }
2026-04-02 01:22:39.835820 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-02 01:22:39.835829 | orchestrator |     "msg": "test-3: 192.168.112.133"
2026-04-02 01:22:39.835833 | orchestrator | }
2026-04-02 01:22:39.835841 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-02 01:22:39.835845 | orchestrator |     "msg": "test-4: 192.168.112.116"
2026-04-02 01:22:39.835849 | orchestrator | }
2026-04-02 01:22:39.835853 | orchestrator |
2026-04-02 01:22:39.835857 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:22:39.835862 | orchestrator | localhost : ok=26 changed=23 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
2026-04-02 01:22:39.835868 | orchestrator |
2026-04-02 01:22:39.835873 | orchestrator |
2026-04-02 01:22:39.835877 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:22:39.835882 | orchestrator | Thursday 02 April 2026 01:22:39 +0000 (0:00:00.113) 0:04:20.910 ********
2026-04-02 01:22:39.835886 | orchestrator | ===============================================================================
2026-04-02 01:22:39.835891 | orchestrator | Wait for instance creation to complete --------------------------------- 57.79s
2026-04-02 01:22:39.835895 | orchestrator | Create test routers ---------------------------------------------------- 29.17s
2026-04-02 01:22:39.835899 | orchestrator | Create floating ip addresses ------------------------------------------- 24.47s
2026-04-02 01:22:39.835903 | orchestrator | Create test subnets ---------------------------------------------------- 17.04s
2026-04-02 01:22:39.835908 | orchestrator | Create test networks --------------------------------------------------- 14.27s
2026-04-02 01:22:39.835913 | orchestrator | Attach test volume ----------------------------------------------------- 13.68s
2026-04-02 01:22:39.835917 | orchestrator | Add member roles to user test ------------------------------------------ 11.22s
2026-04-02 01:22:39.835922 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.93s
2026-04-02 01:22:39.835926 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.52s
2026-04-02 01:22:39.835931 | orchestrator | Create test volume ------------------------------------------------------ 7.44s
2026-04-02 01:22:39.835935 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.61s
2026-04-02 01:22:39.835939 | orchestrator | Create test instances --------------------------------------------------- 4.74s
2026-04-02 01:22:39.835944 | orchestrator | Create ssh security group ----------------------------------------------- 4.70s
2026-04-02 01:22:39.835948 | orchestrator | Add metadata to instances ----------------------------------------------- 4.63s
2026-04-02 01:22:39.835953 | orchestrator | Add tag to instances ---------------------------------------------------- 4.38s
2026-04-02 01:22:39.835957 | orchestrator | Create test-admin user -------------------------------------------------- 4.37s
2026-04-02 01:22:39.835962 | orchestrator | Create test server group ------------------------------------------------ 4.37s
2026-04-02 01:22:39.835966 | orchestrator | Create test project ----------------------------------------------------- 4.21s
2026-04-02 01:22:39.835971 | orchestrator | Create test keypair ----------------------------------------------------- 4.20s
2026-04-02 01:22:39.835976 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.20s
2026-04-02 01:22:40.016381 | orchestrator | + server_list
2026-04-02 01:22:40.016467 | orchestrator | + openstack --os-cloud test server list
2026-04-02 01:22:43.533971 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-02 01:22:43.534096 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-02 01:22:43.534108 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-02 01:22:43.534115 | orchestrator | | 56bf57a4-c73a-4cc8-945f-232b8bc97053 | test-3 | ACTIVE | test-2=192.168.112.133, 192.168.201.212 | N/A (booted from volume) | SCS-1L-1 |
2026-04-02 01:22:43.534122 | orchestrator | | fb878a84-596b-4ef7-b613-574f611a21fc | test-4 | ACTIVE | test-3=192.168.112.116, 192.168.202.53 | N/A (booted from volume) | SCS-1L-1 |
2026-04-02 01:22:43.534147 | orchestrator | | 22de0cc0-b7b9-4790-b66c-ba42f09c1396 | test-2 | ACTIVE | test-2=192.168.112.103, 192.168.201.23 | N/A (booted from volume) | SCS-1L-1 |
2026-04-02 01:22:43.534165 | orchestrator | | 3b511730-8d13-4182-94db-a3f11019d51a | test | ACTIVE | test-1=192.168.112.156, 192.168.200.216 | N/A (booted from volume) | SCS-1L-1 |
2026-04-02 01:22:43.534175 | orchestrator | | 65c91abc-d5a1-4fe7-8560-c0a229536185 | test-1 | ACTIVE | test-1=192.168.112.100, 192.168.200.229 | N/A (booted from volume) | SCS-1L-1 |
2026-04-02 01:22:43.534185 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-02 01:22:43.783936 | orchestrator | + openstack --os-cloud test server show test
2026-04-02 01:22:46.931459 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:46.931543 | orchestrator | | Field | Value |
2026-04-02 01:22:46.931550 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:46.931555 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-02 01:22:46.931560 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-02 01:22:46.931565 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-02 01:22:46.931569 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-02 01:22:46.931573 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-02 01:22:46.931590 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-02 01:22:46.931603 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-02 01:22:46.931608 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-02 01:22:46.931612 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-02 01:22:46.931616 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-02 01:22:46.931620 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-02 01:22:46.931624 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-02 01:22:46.931628 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-02 01:22:46.931632 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-02 01:22:46.931642 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-02 01:22:46.931646 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-02T01:21:00.000000 |
2026-04-02 01:22:46.931653 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-02 01:22:46.931657 | orchestrator | | accessIPv4 | |
2026-04-02 01:22:46.931663 | orchestrator | | accessIPv6 | |
2026-04-02 01:22:46.931667 | orchestrator | | addresses | test-1=192.168.112.156, 192.168.200.216 |
2026-04-02 01:22:46.931671 | orchestrator | | config_drive | |
2026-04-02 01:22:46.931675 | orchestrator | | created | 2026-04-02T01:20:32Z |
2026-04-02 01:22:46.931679 | orchestrator | | description | None |
2026-04-02 01:22:46.931686 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-02 01:22:46.931690 | orchestrator | | hostId | bae7c3cf5e92993b192099614d0d49a8f2669d7c9e72be40ea933911 |
2026-04-02 01:22:46.931694 | orchestrator | | host_status | None |
2026-04-02 01:22:46.931701 | orchestrator | | id | 3b511730-8d13-4182-94db-a3f11019d51a |
2026-04-02 01:22:46.931705 | orchestrator | | image | N/A (booted from volume) |
2026-04-02 01:22:46.931711 | orchestrator | | key_name | test |
2026-04-02 01:22:46.931715 | orchestrator | | locked | False |
2026-04-02 01:22:46.931719 | orchestrator | | locked_reason | None |
2026-04-02 01:22:46.931723 | orchestrator | | name | test |
2026-04-02 01:22:46.931727 | orchestrator | | pinned_availability_zone | None |
2026-04-02 01:22:46.931733 | orchestrator | | progress | 0 |
2026-04-02 01:22:46.931737 | orchestrator | | project_id | 560a082c675b46848d69b3bcf223d04e |
2026-04-02 01:22:46.931741 | orchestrator | | properties | hostname='test' |
2026-04-02 01:22:46.931748 | orchestrator | | security_groups | name='icmp' |
2026-04-02 01:22:46.931755 | orchestrator | | | name='ssh' |
2026-04-02 01:22:46.931759 | orchestrator | | server_groups | None |
2026-04-02 01:22:46.931762 | orchestrator | | status | ACTIVE |
2026-04-02 01:22:46.931766 | orchestrator | | tags | test |
2026-04-02 01:22:46.931770 | orchestrator | | trusted_image_certificates | None |
2026-04-02 01:22:46.931781 | orchestrator | | updated | 2026-04-02T01:21:31Z |
2026-04-02 01:22:46.931785 | orchestrator | | user_id | d8eb031d6fb54a3e8dfcbca23dd14a56 |
2026-04-02 01:22:46.931789 | orchestrator | | volumes_attached | delete_on_termination='True', id='4ed8fe2b-3eb0-46c6-8141-58b27643f7ae' |
2026-04-02 01:22:46.931793 | orchestrator | | | delete_on_termination='False', id='d6d78a4b-b7e7-4351-9573-571fdba4aad2' |
2026-04-02 01:22:46.934735 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:47.181629 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-02 01:22:50.296860 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:50.296950 | orchestrator | | Field | Value |
2026-04-02 01:22:50.296961 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:50.296969 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-02 01:22:50.296997 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-02 01:22:50.297004 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-02 01:22:50.297010 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-02 01:22:50.297017 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-02 01:22:50.297023 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-02 01:22:50.297044 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-02 01:22:50.297057 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-02 01:22:50.297064 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-02 01:22:50.297070 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-02 01:22:50.297117 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-02 01:22:50.297125 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-02 01:22:50.297132 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-02 01:22:50.297163 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-02 01:22:50.297171 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-02 01:22:50.297178 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-02T01:20:59.000000 |
2026-04-02 01:22:50.297193 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-02 01:22:50.297200 | orchestrator | | accessIPv4 | |
2026-04-02 01:22:50.297207 | orchestrator | | accessIPv6 | |
2026-04-02 01:22:50.297220 | orchestrator | | addresses | test-1=192.168.112.100, 192.168.200.229 |
2026-04-02 01:22:50.297234 | orchestrator | | config_drive | |
2026-04-02 01:22:50.297241 | orchestrator | | created | 2026-04-02T01:20:32Z |
2026-04-02 01:22:50.297247 | orchestrator | | description | None |
2026-04-02 01:22:50.297253 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-02 01:22:50.297259 | orchestrator | | hostId | b34c2c86571014463797f5c826e4e3a9fa7543872d52b608f3dc5371 |
2026-04-02 01:22:50.297266 | orchestrator | | host_status | None |
2026-04-02 01:22:50.297278 | orchestrator | | id | 65c91abc-d5a1-4fe7-8560-c0a229536185 |
2026-04-02 01:22:50.297288 | orchestrator | | image | N/A (booted from volume) |
2026-04-02 01:22:50.297295 | orchestrator | | key_name | test |
2026-04-02 01:22:50.297308 | orchestrator | | locked | False |
2026-04-02 01:22:50.297314 | orchestrator | | locked_reason | None |
2026-04-02 01:22:50.297320 | orchestrator | | name | test-1 |
2026-04-02 01:22:50.297326 | orchestrator | | pinned_availability_zone | None |
2026-04-02 01:22:50.297333 | orchestrator | | progress | 0 |
2026-04-02 01:22:50.297340 | orchestrator | | project_id | 560a082c675b46848d69b3bcf223d04e |
2026-04-02 01:22:50.297347 | orchestrator | | properties | hostname='test-1' |
2026-04-02 01:22:50.297358 | orchestrator | | security_groups | name='icmp' |
2026-04-02 01:22:50.297368 | orchestrator | | | name='ssh' |
2026-04-02 01:22:50.297385 | orchestrator | | server_groups | None |
2026-04-02 01:22:50.297392 | orchestrator | | status | ACTIVE |
2026-04-02 01:22:50.297398 | orchestrator | | tags | test |
2026-04-02 01:22:50.297405 | orchestrator | | trusted_image_certificates | None |
2026-04-02 01:22:50.297411 | orchestrator | | updated | 2026-04-02T01:21:31Z |
2026-04-02 01:22:50.297418 | orchestrator | | user_id | d8eb031d6fb54a3e8dfcbca23dd14a56 |
2026-04-02 01:22:50.297424 | orchestrator | | volumes_attached | delete_on_termination='True', id='9c395f1e-246c-4283-b16b-640519f04272' |
2026-04-02 01:22:50.302208 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:50.576855 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-02 01:22:53.279048 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:53.279163 | orchestrator | | Field | Value |
2026-04-02 01:22:53.279173 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:53.279180 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-02 01:22:53.279186 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-02 01:22:53.279191 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-02 01:22:53.279198 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-02 01:22:53.279203 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-02 01:22:53.279209 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-02 01:22:53.279225 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-02 01:22:53.279234 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-02 01:22:53.279243 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-02 01:22:53.279249 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-02 01:22:53.279254 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-02 01:22:53.279260 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-02 01:22:53.279267 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-02 01:22:53.279273 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-02 01:22:53.279279 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-02 01:22:53.279286 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-02T01:21:00.000000 |
2026-04-02 01:22:53.279302 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-02 01:22:53.279310 | orchestrator | | accessIPv4 | |
2026-04-02 01:22:53.279316 | orchestrator | | accessIPv6 | |
2026-04-02 01:22:53.279322 | orchestrator | | addresses | test-2=192.168.112.103, 192.168.201.23 |
2026-04-02 01:22:53.279327 | orchestrator | | config_drive | |
2026-04-02 01:22:53.279333 | orchestrator | | created | 2026-04-02T01:20:32Z |
2026-04-02 01:22:53.279339 | orchestrator | | description | None |
2026-04-02 01:22:53.279345 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-02 01:22:53.279351 | orchestrator | | hostId | b34c2c86571014463797f5c826e4e3a9fa7543872d52b608f3dc5371 |
2026-04-02 01:22:53.279356 | orchestrator | | host_status | None |
2026-04-02 01:22:53.279369 | orchestrator | | id | 22de0cc0-b7b9-4790-b66c-ba42f09c1396 |
2026-04-02 01:22:53.279375 | orchestrator | | image | N/A (booted from volume) |
2026-04-02 01:22:53.279381 | orchestrator | | key_name | test |
2026-04-02 01:22:53.279387 | orchestrator | | locked | False |
2026-04-02 01:22:53.279393 | orchestrator | | locked_reason | None |
2026-04-02 01:22:53.279399 | orchestrator | | name | test-2 |
2026-04-02 01:22:53.279405 | orchestrator | | pinned_availability_zone | None |
2026-04-02 01:22:53.279411 | orchestrator | | progress | 0 |
2026-04-02 01:22:53.279420 | orchestrator | | project_id | 560a082c675b46848d69b3bcf223d04e |
2026-04-02 01:22:53.279431 | orchestrator | | properties | hostname='test-2' |
2026-04-02 01:22:53.279441 | orchestrator | | security_groups | name='icmp' |
2026-04-02 01:22:53.279450 | orchestrator | | | name='ssh' |
2026-04-02 01:22:53.279457 | orchestrator | | server_groups | None |
2026-04-02 01:22:53.279463 | orchestrator | | status | ACTIVE |
2026-04-02 01:22:53.279469 | orchestrator | | tags | test |
2026-04-02 01:22:53.279476 | orchestrator | | trusted_image_certificates | None |
2026-04-02 01:22:53.279482 | orchestrator | | updated | 2026-04-02T01:21:32Z |
2026-04-02 01:22:53.279488 | orchestrator | | user_id | d8eb031d6fb54a3e8dfcbca23dd14a56 |
2026-04-02 01:22:53.279498 | orchestrator | | volumes_attached | delete_on_termination='True', id='5b82fa9f-0fa5-44f6-987d-9a23cb1d0532' |
2026-04-02 01:22:53.284295 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:53.529625 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-02 01:22:56.442588 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:56.442678 | orchestrator | | Field | Value |
2026-04-02 01:22:56.442686 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:56.442690 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-02 01:22:56.442694 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-02 01:22:56.442698 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-02 01:22:56.442702 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-02 01:22:56.442718 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-02 01:22:56.442722 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-02 01:22:56.442737 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-02 01:22:56.442741 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-02 01:22:56.442748 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-02 01:22:56.442752 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-02 01:22:56.442756 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-02 01:22:56.442760 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-02 01:22:56.442764 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-02 01:22:56.442768 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-02 01:22:56.442779 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-02 01:22:56.442783 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-02T01:21:02.000000 |
2026-04-02 01:22:56.442793 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-02 01:22:56.442800 | orchestrator | | accessIPv4 | |
2026-04-02 01:22:56.442809 | orchestrator | | accessIPv6 | |
2026-04-02 01:22:56.442821 | orchestrator | | addresses | test-2=192.168.112.133, 192.168.201.212 |
2026-04-02 01:22:56.442827 | orchestrator | | config_drive | |
2026-04-02 01:22:56.442833 | orchestrator | | created | 2026-04-02T01:20:36Z |
2026-04-02 01:22:56.442839 | orchestrator | | description | None |
2026-04-02 01:22:56.442850 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-02 01:22:56.442856 | orchestrator | | hostId | b34c2c86571014463797f5c826e4e3a9fa7543872d52b608f3dc5371 |
2026-04-02 01:22:56.442862 | orchestrator | | host_status | None |
2026-04-02 01:22:56.442873 | orchestrator | | id | 56bf57a4-c73a-4cc8-945f-232b8bc97053 |
2026-04-02 01:22:56.442880 | orchestrator | | image | N/A (booted from volume) |
2026-04-02 01:22:56.442890 | orchestrator | | key_name | test |
2026-04-02 01:22:56.442894 | orchestrator | | locked | False |
2026-04-02 01:22:56.442898 | orchestrator | | locked_reason | None |
2026-04-02 01:22:56.442902 | orchestrator | | name | test-3 |
2026-04-02 01:22:56.442910 | orchestrator | | pinned_availability_zone | None |
2026-04-02 01:22:56.442914 | orchestrator | | progress | 0 |
2026-04-02 01:22:56.442918 | orchestrator | | project_id | 560a082c675b46848d69b3bcf223d04e |
2026-04-02 01:22:56.442921 | orchestrator | | properties | hostname='test-3' |
2026-04-02 01:22:56.442929 | orchestrator | | security_groups | name='icmp' |
2026-04-02 01:22:56.442933 | orchestrator | | | name='ssh' |
2026-04-02 01:22:56.442940 | orchestrator | | server_groups | None |
2026-04-02 01:22:56.442944 | orchestrator | | status | ACTIVE |
2026-04-02 01:22:56.442948 | orchestrator | | tags | test |
2026-04-02 01:22:56.442955 | orchestrator | | trusted_image_certificates | None |
2026-04-02 01:22:56.442959 | orchestrator | | updated | 2026-04-02T01:21:33Z |
2026-04-02 01:22:56.442963 | orchestrator | | user_id | d8eb031d6fb54a3e8dfcbca23dd14a56 |
2026-04-02 01:22:56.442967 | orchestrator | | volumes_attached | delete_on_termination='True', id='c5dc67f5-58cb-4327-88fb-8df6091e7982' |
2026-04-02 01:22:56.447780 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:56.712179 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-02 01:22:59.578126 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:59.578224 | orchestrator | | Field | Value |
2026-04-02 01:22:59.578233 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-02 01:22:59.578239 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-02 01:22:59.578257 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-02 01:22:59.578263 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-02 01:22:59.578268 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-04-02 01:22:59.578274 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-02 01:22:59.578280 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-02
01:22:59.578294 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-02 01:22:59.578511 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-02 01:22:59.578519 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-02 01:22:59.578525 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-02 01:22:59.578530 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-02 01:22:59.578540 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-02 01:22:59.578546 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-02 01:22:59.578552 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-02 01:22:59.578557 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-02 01:22:59.578565 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-02T01:21:00.000000 | 2026-04-02 01:22:59.578576 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-02 01:22:59.578582 | orchestrator | | accessIPv4 | | 2026-04-02 01:22:59.578587 | orchestrator | | accessIPv6 | | 2026-04-02 01:22:59.578593 | orchestrator | | addresses | test-3=192.168.112.116, 192.168.202.53 | 2026-04-02 01:22:59.578602 | orchestrator | | config_drive | | 2026-04-02 01:22:59.578608 | orchestrator | | created | 2026-04-02T01:20:34Z | 2026-04-02 01:22:59.578614 | orchestrator | | description | None | 2026-04-02 01:22:59.578619 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-02 01:22:59.578625 | orchestrator | | hostId | bae7c3cf5e92993b192099614d0d49a8f2669d7c9e72be40ea933911 | 2026-04-02 01:22:59.578633 | orchestrator | | host_status | None | 2026-04-02 01:22:59.578642 | orchestrator | | id | 
fb878a84-596b-4ef7-b613-574f611a21fc | 2026-04-02 01:22:59.578648 | orchestrator | | image | N/A (booted from volume) | 2026-04-02 01:22:59.578654 | orchestrator | | key_name | test | 2026-04-02 01:22:59.578662 | orchestrator | | locked | False | 2026-04-02 01:22:59.578668 | orchestrator | | locked_reason | None | 2026-04-02 01:22:59.578674 | orchestrator | | name | test-4 | 2026-04-02 01:22:59.578679 | orchestrator | | pinned_availability_zone | None | 2026-04-02 01:22:59.578685 | orchestrator | | progress | 0 | 2026-04-02 01:22:59.578691 | orchestrator | | project_id | 560a082c675b46848d69b3bcf223d04e | 2026-04-02 01:22:59.578698 | orchestrator | | properties | hostname='test-4' | 2026-04-02 01:22:59.578708 | orchestrator | | security_groups | name='icmp' | 2026-04-02 01:22:59.578715 | orchestrator | | | name='ssh' | 2026-04-02 01:22:59.578725 | orchestrator | | server_groups | None | 2026-04-02 01:22:59.578732 | orchestrator | | status | ACTIVE | 2026-04-02 01:22:59.578738 | orchestrator | | tags | test | 2026-04-02 01:22:59.578745 | orchestrator | | trusted_image_certificates | None | 2026-04-02 01:22:59.578751 | orchestrator | | updated | 2026-04-02T01:21:33Z | 2026-04-02 01:22:59.578758 | orchestrator | | user_id | d8eb031d6fb54a3e8dfcbca23dd14a56 | 2026-04-02 01:22:59.578764 | orchestrator | | volumes_attached | delete_on_termination='True', id='4ed9f8a8-8f33-413e-af9c-a8b4e6d1305d' | 2026-04-02 01:22:59.581930 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-02 01:22:59.831619 | orchestrator | + server_ping 2026-04-02 01:22:59.832950 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-02 01:22:59.833225 | orchestrator | ++ tr -d '\r' 2026-04-02 01:23:02.586483 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:02.586556 | orchestrator | + ping -c3 192.168.112.100 2026-04-02 01:23:02.599275 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 2026-04-02 01:23:02.599369 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=6.69 ms 2026-04-02 01:23:03.596624 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.23 ms 2026-04-02 01:23:04.597941 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.58 ms 2026-04-02 01:23:04.598066 | orchestrator | 2026-04-02 01:23:04.598078 | orchestrator | --- 192.168.112.100 ping statistics --- 2026-04-02 01:23:04.598136 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:23:04.598145 | orchestrator | rtt min/avg/max/mdev = 1.583/3.500/6.687/2.268 ms 2026-04-02 01:23:04.598808 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:04.598857 | orchestrator | + ping -c3 192.168.112.116 2026-04-02 01:23:04.608938 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-04-02 01:23:04.609023 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.06 ms 2026-04-02 01:23:05.606279 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.96 ms 2026-04-02 01:23:06.606458 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.35 ms 2026-04-02 01:23:06.606530 | orchestrator | 2026-04-02 01:23:06.606537 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-02 01:23:06.606543 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-02 01:23:06.606548 | orchestrator | rtt min/avg/max/mdev = 1.352/3.123/6.057/2.089 ms 2026-04-02 01:23:06.607868 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:06.607935 | orchestrator | + ping -c3 192.168.112.103 2026-04-02 01:23:06.620897 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2026-04-02 01:23:06.620967 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=9.52 ms 2026-04-02 01:23:07.615509 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.18 ms 2026-04-02 01:23:08.616860 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.66 ms 2026-04-02 01:23:08.616966 | orchestrator | 2026-04-02 01:23:08.616975 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-02 01:23:08.616981 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:23:08.616986 | orchestrator | rtt min/avg/max/mdev = 1.661/4.451/9.519/3.589 ms 2026-04-02 01:23:08.616991 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:08.616996 | orchestrator | + ping -c3 192.168.112.133 2026-04-02 01:23:08.628852 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2026-04-02 01:23:08.628947 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=8.30 ms 2026-04-02 01:23:09.624338 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.37 ms 2026-04-02 01:23:10.624777 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.08 ms 2026-04-02 01:23:10.624944 | orchestrator | 2026-04-02 01:23:10.624959 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-04-02 01:23:10.624967 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:23:10.624974 | orchestrator | rtt min/avg/max/mdev = 1.081/3.918/8.301/3.143 ms 2026-04-02 01:23:10.624990 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:10.624997 | orchestrator | + ping -c3 192.168.112.156 2026-04-02 01:23:10.634119 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 2026-04-02 01:23:10.634180 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=3.65 ms 2026-04-02 01:23:11.632532 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=1.40 ms 2026-04-02 01:23:12.634495 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.52 ms 2026-04-02 01:23:12.634574 | orchestrator | 2026-04-02 01:23:12.634588 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-04-02 01:23:12.634598 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:23:12.634607 | orchestrator | rtt min/avg/max/mdev = 1.404/2.190/3.645/1.029 ms 2026-04-02 01:23:12.635352 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-02 01:23:12.635381 | orchestrator | + compute_list 2026-04-02 01:23:12.635393 | orchestrator | + osism manage compute list testbed-node-3 2026-04-02 01:23:14.178619 | orchestrator | 2026-04-02 01:23:14 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:14.178692 
| orchestrator | 2026-04-02 01:23:14 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:14.178706 | orchestrator | 2026-04-02 01:23:14 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:17.843965 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:23:17.844099 | orchestrator | | ID | Name | Status | 2026-04-02 01:23:17.844122 | orchestrator | |--------------------------------------+--------+----------| 2026-04-02 01:23:17.844141 | orchestrator | | 56bf57a4-c73a-4cc8-945f-232b8bc97053 | test-3 | ACTIVE | 2026-04-02 01:23:17.844156 | orchestrator | | 22de0cc0-b7b9-4790-b66c-ba42f09c1396 | test-2 | ACTIVE | 2026-04-02 01:23:17.844171 | orchestrator | | 65c91abc-d5a1-4fe7-8560-c0a229536185 | test-1 | ACTIVE | 2026-04-02 01:23:17.844185 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:23:18.125962 | orchestrator | + osism manage compute list testbed-node-4 2026-04-02 01:23:19.693565 | orchestrator | 2026-04-02 01:23:19 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:19.693637 | orchestrator | 2026-04-02 01:23:19 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:19.693644 | orchestrator | 2026-04-02 01:23:19 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:20.873241 | orchestrator | +------+--------+----------+ 2026-04-02 01:23:20.873324 | orchestrator | | ID | Name | Status | 2026-04-02 01:23:20.873331 | orchestrator | |------+--------+----------| 2026-04-02 01:23:20.873336 | orchestrator | +------+--------+----------+ 2026-04-02 01:23:21.160731 | orchestrator | + osism manage compute list testbed-node-5 2026-04-02 01:23:22.677813 | orchestrator | 2026-04-02 01:23:22 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:22.677889 | orchestrator | 2026-04-02 01:23:22 | ERROR  | Unable to get vault 
secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:22.677906 | orchestrator | 2026-04-02 01:23:22 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:24.433974 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:23:24.434105 | orchestrator | | ID | Name | Status | 2026-04-02 01:23:24.434116 | orchestrator | |--------------------------------------+--------+----------| 2026-04-02 01:23:24.434123 | orchestrator | | fb878a84-596b-4ef7-b613-574f611a21fc | test-4 | ACTIVE | 2026-04-02 01:23:24.434129 | orchestrator | | 3b511730-8d13-4182-94db-a3f11019d51a | test | ACTIVE | 2026-04-02 01:23:24.434146 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:23:24.715657 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-02 01:23:26.324688 | orchestrator | 2026-04-02 01:23:26 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:26.324783 | orchestrator | 2026-04-02 01:23:26 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:26.324798 | orchestrator | 2026-04-02 01:23:26 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:27.897415 | orchestrator | 2026-04-02 01:23:27 | INFO  | No migratable instances found on node testbed-node-4 2026-04-02 01:23:28.168947 | orchestrator | + compute_list 2026-04-02 01:23:28.169007 | orchestrator | + osism manage compute list testbed-node-3 2026-04-02 01:23:29.703872 | orchestrator | 2026-04-02 01:23:29 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:29.703927 | orchestrator | 2026-04-02 01:23:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:29.703934 | orchestrator | 2026-04-02 01:23:29 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:31.226317 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-02 01:23:31.226430 | orchestrator | | ID | Name | Status | 2026-04-02 01:23:31.226437 | orchestrator | |--------------------------------------+--------+----------| 2026-04-02 01:23:31.226441 | orchestrator | | 56bf57a4-c73a-4cc8-945f-232b8bc97053 | test-3 | ACTIVE | 2026-04-02 01:23:31.226445 | orchestrator | | 22de0cc0-b7b9-4790-b66c-ba42f09c1396 | test-2 | ACTIVE | 2026-04-02 01:23:31.226449 | orchestrator | | 65c91abc-d5a1-4fe7-8560-c0a229536185 | test-1 | ACTIVE | 2026-04-02 01:23:31.226453 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:23:31.505132 | orchestrator | + osism manage compute list testbed-node-4 2026-04-02 01:23:33.059620 | orchestrator | 2026-04-02 01:23:33 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:33.060058 | orchestrator | 2026-04-02 01:23:33 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:33.060086 | orchestrator | 2026-04-02 01:23:33 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:34.207480 | orchestrator | +------+--------+----------+ 2026-04-02 01:23:34.207579 | orchestrator | | ID | Name | Status | 2026-04-02 01:23:34.207588 | orchestrator | |------+--------+----------| 2026-04-02 01:23:34.207593 | orchestrator | +------+--------+----------+ 2026-04-02 01:23:34.496606 | orchestrator | + osism manage compute list testbed-node-5 2026-04-02 01:23:36.062932 | orchestrator | 2026-04-02 01:23:36 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:36.063012 | orchestrator | 2026-04-02 01:23:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:36.063021 | orchestrator | 2026-04-02 01:23:36 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:38.030635 | orchestrator | +--------------------------------------+--------+----------+ 
2026-04-02 01:23:38.030698 | orchestrator | | ID | Name | Status | 2026-04-02 01:23:38.030709 | orchestrator | |--------------------------------------+--------+----------| 2026-04-02 01:23:38.030715 | orchestrator | | fb878a84-596b-4ef7-b613-574f611a21fc | test-4 | ACTIVE | 2026-04-02 01:23:38.030719 | orchestrator | | 3b511730-8d13-4182-94db-a3f11019d51a | test | ACTIVE | 2026-04-02 01:23:38.030723 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:23:38.309824 | orchestrator | + server_ping 2026-04-02 01:23:38.311109 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-02 01:23:38.311253 | orchestrator | ++ tr -d '\r' 2026-04-02 01:23:41.123178 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:41.123269 | orchestrator | + ping -c3 192.168.112.100 2026-04-02 01:23:41.134911 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 
2026-04-02 01:23:41.134988 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=7.13 ms 2026-04-02 01:23:42.132013 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=3.02 ms 2026-04-02 01:23:43.132117 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.35 ms 2026-04-02 01:23:43.132180 | orchestrator | 2026-04-02 01:23:43.132190 | orchestrator | --- 192.168.112.100 ping statistics --- 2026-04-02 01:23:43.132197 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:23:43.132203 | orchestrator | rtt min/avg/max/mdev = 1.352/3.834/7.132/2.428 ms 2026-04-02 01:23:43.132673 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:43.132688 | orchestrator | + ping -c3 192.168.112.116 2026-04-02 01:23:43.141148 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2026-04-02 01:23:43.141209 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=4.43 ms 2026-04-02 01:23:44.139884 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.34 ms 2026-04-02 01:23:45.141216 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=0.987 ms 2026-04-02 01:23:45.141759 | orchestrator | 2026-04-02 01:23:45.141794 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-02 01:23:45.141803 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:23:45.141810 | orchestrator | rtt min/avg/max/mdev = 0.987/2.252/4.427/1.544 ms 2026-04-02 01:23:45.141816 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:45.141823 | orchestrator | + ping -c3 192.168.112.103 2026-04-02 01:23:45.148254 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
2026-04-02 01:23:45.148315 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=4.20 ms 2026-04-02 01:23:46.146271 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=1.28 ms 2026-04-02 01:23:47.147967 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.11 ms 2026-04-02 01:23:47.148016 | orchestrator | 2026-04-02 01:23:47.148022 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-02 01:23:47.148027 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-02 01:23:47.148031 | orchestrator | rtt min/avg/max/mdev = 1.107/2.194/4.196/1.416 ms 2026-04-02 01:23:47.148903 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:47.148942 | orchestrator | + ping -c3 192.168.112.133 2026-04-02 01:23:47.164697 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 2026-04-02 01:23:47.164756 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=11.8 ms 2026-04-02 01:23:48.156524 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=1.43 ms 2026-04-02 01:23:49.158433 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.14 ms 2026-04-02 01:23:49.158496 | orchestrator | 2026-04-02 01:23:49.158506 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-04-02 01:23:49.158514 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-02 01:23:49.158520 | orchestrator | rtt min/avg/max/mdev = 1.141/4.799/11.823/4.968 ms 2026-04-02 01:23:49.158540 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:23:49.158547 | orchestrator | + ping -c3 192.168.112.156 2026-04-02 01:23:49.166839 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 
2026-04-02 01:23:49.166909 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=3.80 ms 2026-04-02 01:23:50.166670 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=1.58 ms 2026-04-02 01:23:51.168103 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.17 ms 2026-04-02 01:23:51.168153 | orchestrator | 2026-04-02 01:23:51.168159 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-04-02 01:23:51.168164 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-02 01:23:51.168168 | orchestrator | rtt min/avg/max/mdev = 1.168/2.183/3.798/1.154 ms 2026-04-02 01:23:51.168719 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-02 01:23:52.769916 | orchestrator | 2026-04-02 01:23:52 | ERROR  | Unable to get ansible vault password 2026-04-02 01:23:52.770797 | orchestrator | 2026-04-02 01:23:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:23:52.770841 | orchestrator | 2026-04-02 01:23:52 | ERROR  | Dropping encrypted entries 2026-04-02 01:23:54.814947 | orchestrator | 2026-04-02 01:23:54 | INFO  | Live migrating server fb878a84-596b-4ef7-b613-574f611a21fc 2026-04-02 01:24:07.787233 | orchestrator | 2026-04-02 01:24:07 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:10.193029 | orchestrator | 2026-04-02 01:24:10 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:12.865624 | orchestrator | 2026-04-02 01:24:12 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:15.304919 | orchestrator | 2026-04-02 01:24:15 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:17.717729 | orchestrator | 2026-04-02 01:24:17 | INFO  | 
Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:20.261615 | orchestrator | 2026-04-02 01:24:20 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:22.555067 | orchestrator | 2026-04-02 01:24:22 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:24.778449 | orchestrator | 2026-04-02 01:24:24 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:24:27.049278 | orchestrator | 2026-04-02 01:24:27 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) completed with status ACTIVE 2026-04-02 01:24:27.049375 | orchestrator | 2026-04-02 01:24:27 | INFO  | Live migrating server 3b511730-8d13-4182-94db-a3f11019d51a 2026-04-02 01:24:38.905977 | orchestrator | 2026-04-02 01:24:38 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:41.266474 | orchestrator | 2026-04-02 01:24:41 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:43.629565 | orchestrator | 2026-04-02 01:24:43 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:45.973597 | orchestrator | 2026-04-02 01:24:45 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:48.266119 | orchestrator | 2026-04-02 01:24:48 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:50.590170 | orchestrator | 2026-04-02 01:24:50 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:52.823497 | orchestrator | 2026-04-02 01:24:52 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:55.185895 | 
orchestrator | 2026-04-02 01:24:55 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:57.545246 | orchestrator | 2026-04-02 01:24:57 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:24:59.778911 | orchestrator | 2026-04-02 01:24:59 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:25:02.080325 | orchestrator | 2026-04-02 01:25:02 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) completed with status ACTIVE 2026-04-02 01:25:02.357417 | orchestrator | + compute_list 2026-04-02 01:25:02.357502 | orchestrator | + osism manage compute list testbed-node-3 2026-04-02 01:25:03.912499 | orchestrator | 2026-04-02 01:25:03 | ERROR  | Unable to get ansible vault password 2026-04-02 01:25:03.912578 | orchestrator | 2026-04-02 01:25:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:25:03.912587 | orchestrator | 2026-04-02 01:25:03 | ERROR  | Dropping encrypted entries 2026-04-02 01:25:05.611296 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:25:05.611391 | orchestrator | | ID | Name | Status | 2026-04-02 01:25:05.611400 | orchestrator | |--------------------------------------+--------+----------| 2026-04-02 01:25:05.611407 | orchestrator | | 56bf57a4-c73a-4cc8-945f-232b8bc97053 | test-3 | ACTIVE | 2026-04-02 01:25:05.611415 | orchestrator | | fb878a84-596b-4ef7-b613-574f611a21fc | test-4 | ACTIVE | 2026-04-02 01:25:05.611445 | orchestrator | | 22de0cc0-b7b9-4790-b66c-ba42f09c1396 | test-2 | ACTIVE | 2026-04-02 01:25:05.611452 | orchestrator | | 3b511730-8d13-4182-94db-a3f11019d51a | test | ACTIVE | 2026-04-02 01:25:05.611458 | orchestrator | | 65c91abc-d5a1-4fe7-8560-c0a229536185 | test-1 | ACTIVE | 2026-04-02 01:25:05.611465 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-02 01:25:05.925107 | orchestrator | + osism manage compute list testbed-node-4 2026-04-02 01:25:07.462117 | orchestrator | 2026-04-02 01:25:07 | ERROR  | Unable to get ansible vault password 2026-04-02 01:25:07.462235 | orchestrator | 2026-04-02 01:25:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:25:07.462245 | orchestrator | 2026-04-02 01:25:07 | ERROR  | Dropping encrypted entries 2026-04-02 01:25:08.589173 | orchestrator | +------+--------+----------+ 2026-04-02 01:25:08.589250 | orchestrator | | ID | Name | Status | 2026-04-02 01:25:08.589256 | orchestrator | |------+--------+----------| 2026-04-02 01:25:08.589260 | orchestrator | +------+--------+----------+ 2026-04-02 01:25:08.887354 | orchestrator | + osism manage compute list testbed-node-5 2026-04-02 01:25:10.482334 | orchestrator | 2026-04-02 01:25:10 | ERROR  | Unable to get ansible vault password 2026-04-02 01:25:10.482422 | orchestrator | 2026-04-02 01:25:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:25:10.482433 | orchestrator | 2026-04-02 01:25:10 | ERROR  | Dropping encrypted entries 2026-04-02 01:25:11.627858 | orchestrator | +------+--------+----------+ 2026-04-02 01:25:11.627954 | orchestrator | | ID | Name | Status | 2026-04-02 01:25:11.627961 | orchestrator | |------+--------+----------| 2026-04-02 01:25:11.627966 | orchestrator | +------+--------+----------+ 2026-04-02 01:25:11.943184 | orchestrator | + server_ping 2026-04-02 01:25:11.944007 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-02 01:25:11.944410 | orchestrator | ++ tr -d '\r' 2026-04-02 01:25:14.717863 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" 
| tr -d '\r') 2026-04-02 01:25:14.717972 | orchestrator | + ping -c3 192.168.112.100 2026-04-02 01:25:14.727175 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 2026-04-02 01:25:14.727259 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=7.09 ms 2026-04-02 01:25:15.723777 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.32 ms 2026-04-02 01:25:16.724580 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=1.46 ms 2026-04-02 01:25:16.725025 | orchestrator | 2026-04-02 01:25:16.725050 | orchestrator | --- 192.168.112.100 ping statistics --- 2026-04-02 01:25:16.725057 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:25:16.725061 | orchestrator | rtt min/avg/max/mdev = 1.461/3.620/7.085/2.474 ms 2026-04-02 01:25:16.725250 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:25:16.725265 | orchestrator | + ping -c3 192.168.112.116 2026-04-02 01:25:16.736655 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-04-02 01:25:16.736752 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.50 ms 2026-04-02 01:25:17.733725 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.21 ms 2026-04-02 01:25:18.735395 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.70 ms 2026-04-02 01:25:18.735477 | orchestrator | 2026-04-02 01:25:18.735484 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-02 01:25:18.735490 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:25:18.735495 | orchestrator | rtt min/avg/max/mdev = 1.697/3.471/6.504/2.154 ms 2026-04-02 01:25:18.735500 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:25:18.735504 | orchestrator | + ping -c3 192.168.112.103 2026-04-02 01:25:18.749088 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2026-04-02 01:25:18.749191 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=9.35 ms 2026-04-02 01:25:19.743527 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=1.90 ms 2026-04-02 01:25:20.744182 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.18 ms 2026-04-02 01:25:20.744255 | orchestrator | 2026-04-02 01:25:20.744268 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-02 01:25:20.744278 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:25:20.744286 | orchestrator | rtt min/avg/max/mdev = 1.181/4.144/9.350/3.692 ms 2026-04-02 01:25:20.744776 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:25:20.744838 | orchestrator | + ping -c3 192.168.112.133 2026-04-02 01:25:20.752723 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
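The `+`-prefixed `set -x` trace above shows the body of the `server_ping` helper: it lists all ACTIVE floating IPs of the `test` cloud and pings each one three times. A minimal reconstruction from the traced commands (the loop body is visible in the trace; the function wrapper itself is an assumption):

```shell
# Reconstructed sketch of the server_ping helper seen in the set -x trace.
# Lists ACTIVE floating IPs from the "test" cloud and pings each 3 times;
# tr strips carriage returns that the CLI output may carry.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE \
                         -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

With `set -e` in effect (common in these job scripts), a single failed `ping` would abort the run, which makes this loop a cheap post-migration connectivity check.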
2026-04-02 01:25:20.752805 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=4.75 ms 2026-04-02 01:25:21.750420 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.04 ms 2026-04-02 01:25:22.751319 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.49 ms 2026-04-02 01:25:22.751735 | orchestrator | 2026-04-02 01:25:22.751814 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-04-02 01:25:22.751831 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-02 01:25:22.751840 | orchestrator | rtt min/avg/max/mdev = 1.486/2.757/4.747/1.424 ms 2026-04-02 01:25:22.752119 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:25:22.752147 | orchestrator | + ping -c3 192.168.112.156 2026-04-02 01:25:22.761876 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 2026-04-02 01:25:22.762059 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=5.30 ms 2026-04-02 01:25:23.760940 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.72 ms 2026-04-02 01:25:24.762658 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=2.27 ms 2026-04-02 01:25:24.762759 | orchestrator | 2026-04-02 01:25:24.762770 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-04-02 01:25:24.762778 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:25:24.762785 | orchestrator | rtt min/avg/max/mdev = 2.268/3.429/5.296/1.333 ms 2026-04-02 01:25:24.763290 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-04-02 01:25:26.385099 | orchestrator | 2026-04-02 01:25:26 | ERROR  | Unable to get ansible vault password 2026-04-02 01:25:26.385192 | orchestrator | 2026-04-02 01:25:26 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-02 01:25:26.385200 | orchestrator | 2026-04-02 01:25:26 | ERROR  | Dropping encrypted entries 2026-04-02 01:25:28.229032 | orchestrator | 2026-04-02 01:25:28 | INFO  | Live migrating server 56bf57a4-c73a-4cc8-945f-232b8bc97053 2026-04-02 01:25:40.249691 | orchestrator | 2026-04-02 01:25:40 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:42.630649 | orchestrator | 2026-04-02 01:25:42 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:45.048808 | orchestrator | 2026-04-02 01:25:45 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:47.492744 | orchestrator | 2026-04-02 01:25:47 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:49.699301 | orchestrator | 2026-04-02 01:25:49 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:51.964564 | orchestrator | 2026-04-02 01:25:51 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:54.295298 | orchestrator | 2026-04-02 01:25:54 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:56.583904 | orchestrator | 2026-04-02 01:25:56 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:25:58.820257 | orchestrator | 2026-04-02 01:25:58 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:26:01.081681 | orchestrator | 2026-04-02 01:26:01 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:26:03.622933 | orchestrator | 2026-04-02 01:26:03 | INFO  | Live migration of 
56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:26:05.958220 | orchestrator | 2026-04-02 01:26:05 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) completed with status ACTIVE 2026-04-02 01:26:05.958306 | orchestrator | 2026-04-02 01:26:05 | INFO  | Live migrating server fb878a84-596b-4ef7-b613-574f611a21fc 2026-04-02 01:26:17.588091 | orchestrator | 2026-04-02 01:26:17 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:19.942065 | orchestrator | 2026-04-02 01:26:19 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:22.325837 | orchestrator | 2026-04-02 01:26:22 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:24.858596 | orchestrator | 2026-04-02 01:26:24 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:27.154145 | orchestrator | 2026-04-02 01:26:27 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:29.450423 | orchestrator | 2026-04-02 01:26:29 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:31.833068 | orchestrator | 2026-04-02 01:26:31 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:34.179477 | orchestrator | 2026-04-02 01:26:34 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:36.482217 | orchestrator | 2026-04-02 01:26:36 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:26:38.873778 | orchestrator | 2026-04-02 01:26:38 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) completed with status ACTIVE 2026-04-02 01:26:38.873853 | 
orchestrator | 2026-04-02 01:26:38 | INFO  | Live migrating server 22de0cc0-b7b9-4790-b66c-ba42f09c1396 2026-04-02 01:26:51.440637 | orchestrator | 2026-04-02 01:26:51 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:26:53.818221 | orchestrator | 2026-04-02 01:26:53 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:26:56.174862 | orchestrator | 2026-04-02 01:26:56 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:26:58.559287 | orchestrator | 2026-04-02 01:26:58 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:27:00.820773 | orchestrator | 2026-04-02 01:27:00 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:27:03.169200 | orchestrator | 2026-04-02 01:27:03 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:27:05.459519 | orchestrator | 2026-04-02 01:27:05 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:27:07.792644 | orchestrator | 2026-04-02 01:27:07 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:27:10.126943 | orchestrator | 2026-04-02 01:27:10 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) completed with status ACTIVE 2026-04-02 01:27:10.127022 | orchestrator | 2026-04-02 01:27:10 | INFO  | Live migrating server 3b511730-8d13-4182-94db-a3f11019d51a 2026-04-02 01:27:22.528696 | orchestrator | 2026-04-02 01:27:22 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:24.808125 | orchestrator | 2026-04-02 01:27:24 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 
2026-04-02 01:27:27.076691 | orchestrator | 2026-04-02 01:27:27 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:29.454282 | orchestrator | 2026-04-02 01:27:29 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:31.759153 | orchestrator | 2026-04-02 01:27:31 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:34.070881 | orchestrator | 2026-04-02 01:27:34 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:36.484132 | orchestrator | 2026-04-02 01:27:36 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:38.725593 | orchestrator | 2026-04-02 01:27:38 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:41.050490 | orchestrator | 2026-04-02 01:27:41 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:27:43.411241 | orchestrator | 2026-04-02 01:27:43 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) completed with status ACTIVE 2026-04-02 01:27:43.411331 | orchestrator | 2026-04-02 01:27:43 | INFO  | Live migrating server 65c91abc-d5a1-4fe7-8560-c0a229536185 2026-04-02 01:27:54.936083 | orchestrator | 2026-04-02 01:27:54 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:27:57.331296 | orchestrator | 2026-04-02 01:27:57 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:27:59.638968 | orchestrator | 2026-04-02 01:27:59 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:28:01.935980 | orchestrator | 2026-04-02 01:28:01 | INFO  | Live migration of 
65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:28:04.233925 | orchestrator | 2026-04-02 01:28:04 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:28:06.521430 | orchestrator | 2026-04-02 01:28:06 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:28:08.896339 | orchestrator | 2026-04-02 01:28:08 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:28:11.152838 | orchestrator | 2026-04-02 01:28:11 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress 2026-04-02 01:28:13.475594 | orchestrator | 2026-04-02 01:28:13 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) completed with status ACTIVE 2026-04-02 01:28:13.761137 | orchestrator | + compute_list 2026-04-02 01:28:13.761241 | orchestrator | + osism manage compute list testbed-node-3 2026-04-02 01:28:15.351412 | orchestrator | 2026-04-02 01:28:15 | ERROR  | Unable to get ansible vault password 2026-04-02 01:28:15.351523 | orchestrator | 2026-04-02 01:28:15 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:28:15.351533 | orchestrator | 2026-04-02 01:28:15 | ERROR  | Dropping encrypted entries 2026-04-02 01:28:16.520616 | orchestrator | +------+--------+----------+ 2026-04-02 01:28:16.520762 | orchestrator | | ID | Name | Status | 2026-04-02 01:28:16.520785 | orchestrator | |------+--------+----------| 2026-04-02 01:28:16.520802 | orchestrator | +------+--------+----------+ 2026-04-02 01:28:16.801038 | orchestrator | + osism manage compute list testbed-node-4 2026-04-02 01:28:18.336386 | orchestrator | 2026-04-02 01:28:18 | ERROR  | Unable to get ansible vault password 2026-04-02 01:28:18.336447 | orchestrator | 2026-04-02 01:28:18 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:28:18.336456 | orchestrator | 2026-04-02 01:28:18 | ERROR  | Dropping encrypted entries 2026-04-02 01:28:20.010593 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:28:20.010809 | orchestrator | | ID | Name | Status | 2026-04-02 01:28:20.010846 | orchestrator | |--------------------------------------+--------+----------| 2026-04-02 01:28:20.010852 | orchestrator | | 56bf57a4-c73a-4cc8-945f-232b8bc97053 | test-3 | ACTIVE | 2026-04-02 01:28:20.010858 | orchestrator | | fb878a84-596b-4ef7-b613-574f611a21fc | test-4 | ACTIVE | 2026-04-02 01:28:20.010866 | orchestrator | | 22de0cc0-b7b9-4790-b66c-ba42f09c1396 | test-2 | ACTIVE | 2026-04-02 01:28:20.010872 | orchestrator | | 3b511730-8d13-4182-94db-a3f11019d51a | test | ACTIVE | 2026-04-02 01:28:20.010878 | orchestrator | | 65c91abc-d5a1-4fe7-8560-c0a229536185 | test-1 | ACTIVE | 2026-04-02 01:28:20.010884 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-02 01:28:20.293110 | orchestrator | + osism manage compute list testbed-node-5 2026-04-02 01:28:21.812833 | orchestrator | 2026-04-02 01:28:21 | ERROR  | Unable to get ansible vault password 2026-04-02 01:28:21.812916 | orchestrator | 2026-04-02 01:28:21 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-02 01:28:21.812925 | orchestrator | 2026-04-02 01:28:21 | ERROR  | Dropping encrypted entries 2026-04-02 01:28:22.915101 | orchestrator | +------+--------+----------+ 2026-04-02 01:28:22.915178 | orchestrator | | ID | Name | Status | 2026-04-02 01:28:22.915184 | orchestrator | |------+--------+----------| 2026-04-02 01:28:22.915189 | orchestrator | +------+--------+----------+ 2026-04-02 01:28:23.203576 | orchestrator | + server_ping 2026-04-02 01:28:23.204612 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f 
value -c 'Floating IP Address' 2026-04-02 01:28:23.205167 | orchestrator | ++ tr -d '\r' 2026-04-02 01:28:26.004453 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:28:26.004547 | orchestrator | + ping -c3 192.168.112.100 2026-04-02 01:28:26.018316 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data. 2026-04-02 01:28:26.018415 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=10.0 ms 2026-04-02 01:28:27.012712 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.99 ms 2026-04-02 01:28:28.013769 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.25 ms 2026-04-02 01:28:28.013849 | orchestrator | 2026-04-02 01:28:28.013859 | orchestrator | --- 192.168.112.100 ping statistics --- 2026-04-02 01:28:28.013866 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:28:28.013873 | orchestrator | rtt min/avg/max/mdev = 2.249/5.097/10.048/3.514 ms 2026-04-02 01:28:28.013881 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:28:28.013888 | orchestrator | + ping -c3 192.168.112.116 2026-04-02 01:28:28.030149 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-04-02 01:28:28.030268 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=11.9 ms 2026-04-02 01:28:29.021009 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.69 ms 2026-04-02 01:28:30.023924 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.93 ms 2026-04-02 01:28:30.024014 | orchestrator | 2026-04-02 01:28:30.024024 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-02 01:28:30.024032 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:28:30.024038 | orchestrator | rtt min/avg/max/mdev = 1.691/5.161/11.859/4.737 ms 2026-04-02 01:28:30.025859 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:28:30.025921 | orchestrator | + ping -c3 192.168.112.103 2026-04-02 01:28:30.042996 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2026-04-02 01:28:30.043076 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=14.5 ms 2026-04-02 01:28:31.032478 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.19 ms 2026-04-02 01:28:32.034098 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.97 ms 2026-04-02 01:28:32.034195 | orchestrator | 2026-04-02 01:28:32.034204 | orchestrator | --- 192.168.112.103 ping statistics --- 2026-04-02 01:28:32.034210 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:28:32.034215 | orchestrator | rtt min/avg/max/mdev = 1.968/6.213/14.485/5.849 ms 2026-04-02 01:28:32.034463 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:28:32.034485 | orchestrator | + ping -c3 192.168.112.133 2026-04-02 01:28:32.044610 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2026-04-02 01:28:32.044729 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=5.72 ms 2026-04-02 01:28:33.042921 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.05 ms 2026-04-02 01:28:34.044138 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.62 ms 2026-04-02 01:28:34.044224 | orchestrator | 2026-04-02 01:28:34.044235 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-04-02 01:28:34.044243 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-02 01:28:34.044250 | orchestrator | rtt min/avg/max/mdev = 1.616/3.129/5.721/1.841 ms 2026-04-02 01:28:34.044257 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-02 01:28:34.044264 | orchestrator | + ping -c3 192.168.112.156 2026-04-02 01:28:34.055125 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 2026-04-02 01:28:34.055200 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=6.56 ms 2026-04-02 01:28:35.050842 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=1.36 ms 2026-04-02 01:28:36.052609 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.09 ms 2026-04-02 01:28:36.052666 | orchestrator | 2026-04-02 01:28:36.052673 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-04-02 01:28:36.052678 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-02 01:28:36.052682 | orchestrator | rtt min/avg/max/mdev = 1.094/3.006/6.564/2.517 ms 2026-04-02 01:28:36.052686 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-04-02 01:28:37.609288 | orchestrator | 2026-04-02 01:28:37 | ERROR  | Unable to get ansible vault password 2026-04-02 01:28:37.609376 | orchestrator | 2026-04-02 01:28:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-02 01:28:37.609388 | orchestrator | 2026-04-02 01:28:37 | ERROR  | Dropping encrypted entries 2026-04-02 01:28:39.012841 | orchestrator | 2026-04-02 01:28:39 | INFO  | Live migrating server 56bf57a4-c73a-4cc8-945f-232b8bc97053 2026-04-02 01:28:48.564596 | orchestrator | 2026-04-02 01:28:48 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:28:50.985334 | orchestrator | 2026-04-02 01:28:50 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:28:53.345516 | orchestrator | 2026-04-02 01:28:53 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:28:55.630216 | orchestrator | 2026-04-02 01:28:55 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:28:57.921909 | orchestrator | 2026-04-02 01:28:57 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:29:00.164958 | orchestrator | 2026-04-02 01:29:00 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:29:02.431013 | orchestrator | 2026-04-02 01:29:02 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:29:04.713903 | orchestrator | 2026-04-02 01:29:04 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) is still in progress 2026-04-02 01:29:06.977636 | orchestrator | 2026-04-02 01:29:06 | INFO  | Live migration of 56bf57a4-c73a-4cc8-945f-232b8bc97053 (test-3) completed with status ACTIVE 2026-04-02 01:29:06.977746 | orchestrator | 2026-04-02 01:29:06 | INFO  | Live migrating server fb878a84-596b-4ef7-b613-574f611a21fc 2026-04-02 01:29:18.759181 | orchestrator | 2026-04-02 01:29:18 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is 
still in progress 2026-04-02 01:29:21.070732 | orchestrator | 2026-04-02 01:29:21 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:29:23.422164 | orchestrator | 2026-04-02 01:29:23 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:29:25.738211 | orchestrator | 2026-04-02 01:29:25 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:29:28.044152 | orchestrator | 2026-04-02 01:29:28 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:29:30.381379 | orchestrator | 2026-04-02 01:29:30 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:29:32.744696 | orchestrator | 2026-04-02 01:29:32 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:29:34.938443 | orchestrator | 2026-04-02 01:29:34 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) is still in progress 2026-04-02 01:29:37.245725 | orchestrator | 2026-04-02 01:29:37 | INFO  | Live migration of fb878a84-596b-4ef7-b613-574f611a21fc (test-4) completed with status ACTIVE 2026-04-02 01:29:37.245800 | orchestrator | 2026-04-02 01:29:37 | INFO  | Live migrating server 22de0cc0-b7b9-4790-b66c-ba42f09c1396 2026-04-02 01:29:46.473277 | orchestrator | 2026-04-02 01:29:46 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:29:48.767917 | orchestrator | 2026-04-02 01:29:48 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:29:51.044094 | orchestrator | 2026-04-02 01:29:51 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:29:53.316596 | orchestrator | 2026-04-02 01:29:53 | INFO  | Live migration of 
22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:29:55.620253 | orchestrator | 2026-04-02 01:29:55 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:29:57.919449 | orchestrator | 2026-04-02 01:29:57 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:30:00.153189 | orchestrator | 2026-04-02 01:30:00 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:30:02.373057 | orchestrator | 2026-04-02 01:30:02 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) is still in progress 2026-04-02 01:30:04.655115 | orchestrator | 2026-04-02 01:30:04 | INFO  | Live migration of 22de0cc0-b7b9-4790-b66c-ba42f09c1396 (test-2) completed with status ACTIVE 2026-04-02 01:30:04.655177 | orchestrator | 2026-04-02 01:30:04 | INFO  | Live migrating server 3b511730-8d13-4182-94db-a3f11019d51a 2026-04-02 01:30:14.828600 | orchestrator | 2026-04-02 01:30:14 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:30:17.259559 | orchestrator | 2026-04-02 01:30:17 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:30:19.662093 | orchestrator | 2026-04-02 01:30:19 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:30:22.005541 | orchestrator | 2026-04-02 01:30:22 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:30:24.275663 | orchestrator | 2026-04-02 01:30:24 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:30:26.591401 | orchestrator | 2026-04-02 01:30:26 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress 2026-04-02 01:30:28.847599 | orchestrator | 2026-04-02 
01:30:28 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress
2026-04-02 01:30:31.068212 | orchestrator | 2026-04-02 01:30:31 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress
2026-04-02 01:30:33.392471 | orchestrator | 2026-04-02 01:30:33 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress
2026-04-02 01:30:35.773510 | orchestrator | 2026-04-02 01:30:35 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) is still in progress
2026-04-02 01:30:38.005318 | orchestrator | 2026-04-02 01:30:38 | INFO  | Live migration of 3b511730-8d13-4182-94db-a3f11019d51a (test) completed with status ACTIVE
2026-04-02 01:30:38.007309 | orchestrator | 2026-04-02 01:30:38 | INFO  | Live migrating server 65c91abc-d5a1-4fe7-8560-c0a229536185
2026-04-02 01:30:47.643561 | orchestrator | 2026-04-02 01:30:47 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:30:50.160403 | orchestrator | 2026-04-02 01:30:50 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:30:52.655596 | orchestrator | 2026-04-02 01:30:52 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:30:54.873860 | orchestrator | 2026-04-02 01:30:54 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:30:57.203951 | orchestrator | 2026-04-02 01:30:57 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:30:59.517928 | orchestrator | 2026-04-02 01:30:59 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:31:01.741112 | orchestrator | 2026-04-02 01:31:01 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:31:03.972456 | orchestrator | 2026-04-02 01:31:03 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) is still in progress
2026-04-02 01:31:06.269481 | orchestrator | 2026-04-02 01:31:06 | INFO  | Live migration of 65c91abc-d5a1-4fe7-8560-c0a229536185 (test-1) completed with status ACTIVE
2026-04-02 01:31:06.525832 | orchestrator | + compute_list
2026-04-02 01:31:06.525920 | orchestrator | + osism manage compute list testbed-node-3
2026-04-02 01:31:08.021586 | orchestrator | 2026-04-02 01:31:08 | ERROR  | Unable to get ansible vault password
2026-04-02 01:31:08.021675 | orchestrator | 2026-04-02 01:31:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-02 01:31:08.021686 | orchestrator | 2026-04-02 01:31:08 | ERROR  | Dropping encrypted entries
2026-04-02 01:31:09.244934 | orchestrator | +------+--------+----------+
2026-04-02 01:31:09.245040 | orchestrator | | ID | Name | Status |
2026-04-02 01:31:09.245049 | orchestrator | |------+--------+----------|
2026-04-02 01:31:09.245057 | orchestrator | +------+--------+----------+
2026-04-02 01:31:09.550426 | orchestrator | + osism manage compute list testbed-node-4
2026-04-02 01:31:11.192769 | orchestrator | 2026-04-02 01:31:11 | ERROR  | Unable to get ansible vault password
2026-04-02 01:31:11.192842 | orchestrator | 2026-04-02 01:31:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-02 01:31:11.192851 | orchestrator | 2026-04-02 01:31:11 | ERROR  | Dropping encrypted entries
2026-04-02 01:31:12.181800 | orchestrator | +------+--------+----------+
2026-04-02 01:31:12.181869 | orchestrator | | ID | Name | Status |
2026-04-02 01:31:12.181879 | orchestrator | |------+--------+----------|
2026-04-02 01:31:12.181886 | orchestrator | +------+--------+----------+
2026-04-02 01:31:12.449836 | orchestrator | + osism manage compute list testbed-node-5
2026-04-02 01:31:14.002932 | orchestrator | 2026-04-02 01:31:14 | ERROR  | Unable to get ansible vault password
2026-04-02 01:31:14.002986 | orchestrator | 2026-04-02 01:31:14 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-02 01:31:14.002995 | orchestrator | 2026-04-02 01:31:14 | ERROR  | Dropping encrypted entries
2026-04-02 01:31:15.390862 | orchestrator | +--------------------------------------+--------+----------+
2026-04-02 01:31:15.390923 | orchestrator | | ID | Name | Status |
2026-04-02 01:31:15.390931 | orchestrator | |--------------------------------------+--------+----------|
2026-04-02 01:31:15.390936 | orchestrator | | 56bf57a4-c73a-4cc8-945f-232b8bc97053 | test-3 | ACTIVE |
2026-04-02 01:31:15.390942 | orchestrator | | fb878a84-596b-4ef7-b613-574f611a21fc | test-4 | ACTIVE |
2026-04-02 01:31:15.390947 | orchestrator | | 22de0cc0-b7b9-4790-b66c-ba42f09c1396 | test-2 | ACTIVE |
2026-04-02 01:31:15.390953 | orchestrator | | 3b511730-8d13-4182-94db-a3f11019d51a | test | ACTIVE |
2026-04-02 01:31:15.390958 | orchestrator | | 65c91abc-d5a1-4fe7-8560-c0a229536185 | test-1 | ACTIVE |
2026-04-02 01:31:15.390964 | orchestrator | +--------------------------------------+--------+----------+
2026-04-02 01:31:15.687124 | orchestrator | + server_ping
2026-04-02 01:31:15.688348 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-02 01:31:15.688839 | orchestrator | ++ tr -d '\r'
2026-04-02 01:31:18.591318 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-02 01:31:18.591413 | orchestrator | + ping -c3 192.168.112.100
2026-04-02 01:31:18.603064 | orchestrator | PING 192.168.112.100 (192.168.112.100) 56(84) bytes of data.
2026-04-02 01:31:18.603237 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=1 ttl=63 time=9.90 ms
2026-04-02 01:31:19.597021 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=2 ttl=63 time=2.54 ms
2026-04-02 01:31:20.599016 | orchestrator | 64 bytes from 192.168.112.100: icmp_seq=3 ttl=63 time=2.14 ms
2026-04-02 01:31:20.599198 | orchestrator |
2026-04-02 01:31:20.599212 | orchestrator | --- 192.168.112.100 ping statistics ---
2026-04-02 01:31:20.599221 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-02 01:31:20.599228 | orchestrator | rtt min/avg/max/mdev = 2.143/4.862/9.901/3.566 ms
2026-04-02 01:31:20.599424 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-02 01:31:20.599441 | orchestrator | + ping -c3 192.168.112.116
2026-04-02 01:31:20.613639 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-04-02 01:31:20.613732 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=9.35 ms
2026-04-02 01:31:21.608008 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.21 ms
2026-04-02 01:31:22.609381 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.70 ms
2026-04-02 01:31:22.609473 | orchestrator |
2026-04-02 01:31:22.609484 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-04-02 01:31:22.609492 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-02 01:31:22.609499 | orchestrator | rtt min/avg/max/mdev = 1.702/4.419/9.347/3.490 ms
2026-04-02 01:31:22.610180 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-02 01:31:22.610205 | orchestrator | + ping -c3 192.168.112.103
2026-04-02 01:31:22.620691 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2026-04-02 01:31:22.620783 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=6.68 ms
2026-04-02 01:31:23.617577 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=1.27 ms
2026-04-02 01:31:24.619851 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.81 ms
2026-04-02 01:31:24.619968 | orchestrator |
2026-04-02 01:31:24.619983 | orchestrator | --- 192.168.112.103 ping statistics ---
2026-04-02 01:31:24.619993 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-02 01:31:24.620001 | orchestrator | rtt min/avg/max/mdev = 1.274/3.253/6.679/2.432 ms
2026-04-02 01:31:24.620658 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-02 01:31:24.621329 | orchestrator | + ping -c3 192.168.112.133
2026-04-02 01:31:24.632915 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2026-04-02 01:31:24.633008 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=7.74 ms
2026-04-02 01:31:25.629179 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.03 ms
2026-04-02 01:31:26.630362 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.66 ms
2026-04-02 01:31:26.630438 | orchestrator |
2026-04-02 01:31:26.630476 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-04-02 01:31:26.630482 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-02 01:31:26.630486 | orchestrator | rtt min/avg/max/mdev = 1.655/3.810/7.742/2.784 ms
2026-04-02 01:31:26.630535 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-02 01:31:26.630542 | orchestrator | + ping -c3 192.168.112.156
2026-04-02 01:31:26.640998 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-04-02 01:31:26.641102 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=5.53 ms
2026-04-02 01:31:27.638763 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=1.66 ms
2026-04-02 01:31:28.641555 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.25 ms
2026-04-02 01:31:28.641620 | orchestrator |
2026-04-02 01:31:28.641632 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-04-02 01:31:28.641654 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-02 01:31:28.641661 | orchestrator | rtt min/avg/max/mdev = 1.250/2.814/5.532/1.929 ms
2026-04-02 01:31:28.756943 | orchestrator | ok: Runtime: 0:17:40.258427
2026-04-02 01:31:28.819810 |
2026-04-02 01:31:28.819952 | TASK [Run tempest]
2026-04-02 01:31:29.530187 | orchestrator | + set -e
2026-04-02 01:31:29.530344 | orchestrator | + source /opt/manager-vars.sh
2026-04-02 01:31:29.530362 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-02 01:31:29.530370 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-02 01:31:29.530378 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-02 01:31:29.530387 | orchestrator | ++ CEPH_VERSION=reef
2026-04-02 01:31:29.530395 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-02 01:31:29.530424 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-02 01:31:29.530438 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-02 01:31:29.530449 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-02 01:31:29.530457 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-02 01:31:29.530467 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-02 01:31:29.530474 | orchestrator | ++ export ARA=false
2026-04-02 01:31:29.530480 | orchestrator | ++ ARA=false
2026-04-02 01:31:29.530488 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-02 01:31:29.530495 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-02 01:31:29.530501 | orchestrator | ++ export TEMPEST=true
2026-04-02 01:31:29.530511 | orchestrator | ++ TEMPEST=true
2026-04-02 01:31:29.530517 | orchestrator | ++ export IS_ZUUL=true
2026-04-02 01:31:29.530524 | orchestrator | ++ IS_ZUUL=true
2026-04-02 01:31:29.530532 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251
2026-04-02 01:31:29.530540 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.251
2026-04-02 01:31:29.530545 | orchestrator | ++ export EXTERNAL_API=false
2026-04-02 01:31:29.530549 | orchestrator | ++ EXTERNAL_API=false
2026-04-02 01:31:29.530553 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-02 01:31:29.530557 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-02 01:31:29.530560 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-02 01:31:29.530564 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-02 01:31:29.530568 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-02 01:31:29.530572 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-02 01:31:29.532241 | orchestrator |
2026-04-02 01:31:29.532321 | orchestrator | # Tempest
2026-04-02 01:31:29.532329 | orchestrator |
2026-04-02 01:31:29.532336 | orchestrator | + echo
2026-04-02 01:31:29.532340 | orchestrator | + echo '# Tempest'
2026-04-02 01:31:29.532347 | orchestrator | + echo
2026-04-02 01:31:29.532351 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-02 01:31:29.532356 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-02 01:31:40.939082 | orchestrator | 2026-04-02 01:31:40 | INFO  | Prepare task for execution of tempest.
2026-04-02 01:31:41.016058 | orchestrator | 2026-04-02 01:31:41 | INFO  | Task 56a19a76-1dfa-4068-b8f5-1ccbd63c6186 (tempest) was prepared for execution.
2026-04-02 01:31:41.016170 | orchestrator | 2026-04-02 01:31:41 | INFO  | It takes a moment until task 56a19a76-1dfa-4068-b8f5-1ccbd63c6186 (tempest) has been started and output is visible here.
2026-04-02 01:32:57.617375 | orchestrator |
2026-04-02 01:32:57.617481 | orchestrator | PLAY [Run tempest] *************************************************************
2026-04-02 01:32:57.617492 | orchestrator |
2026-04-02 01:32:57.617498 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-04-02 01:32:57.617517 | orchestrator | Thursday 02 April 2026 01:31:44 +0000 (0:00:00.331) 0:00:00.331 ********
2026-04-02 01:32:57.617524 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.617533 | orchestrator |
2026-04-02 01:32:57.617540 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-04-02 01:32:57.617546 | orchestrator | Thursday 02 April 2026 01:31:45 +0000 (0:00:01.015) 0:00:01.347 ********
2026-04-02 01:32:57.617554 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.617560 | orchestrator |
2026-04-02 01:32:57.617567 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-04-02 01:32:57.617574 | orchestrator | Thursday 02 April 2026 01:31:46 +0000 (0:00:01.222) 0:00:02.569 ********
2026-04-02 01:32:57.617578 | orchestrator | ok: [testbed-manager]
2026-04-02 01:32:57.617583 | orchestrator |
2026-04-02 01:32:57.617587 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-04-02 01:32:57.617591 | orchestrator | Thursday 02 April 2026 01:31:46 +0000 (0:00:00.406) 0:00:02.975 ********
2026-04-02 01:32:57.617595 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.617599 | orchestrator |
2026-04-02 01:32:57.617603 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-04-02 01:32:57.617608 | orchestrator | Thursday 02 April 2026 01:32:07 +0000 (0:00:20.887) 0:00:23.863 ********
2026-04-02 01:32:57.617636 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-04-02 01:32:57.617640 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-04-02 01:32:57.617648 | orchestrator |
2026-04-02 01:32:57.617652 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-04-02 01:32:57.617656 | orchestrator | Thursday 02 April 2026 01:32:16 +0000 (0:00:08.962) 0:00:32.825 ********
2026-04-02 01:32:57.617660 | orchestrator | ok: [testbed-manager] => {
2026-04-02 01:32:57.617664 | orchestrator |  "changed": false,
2026-04-02 01:32:57.617667 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:32:57.617671 | orchestrator | }
2026-04-02 01:32:57.617675 | orchestrator |
2026-04-02 01:32:57.617679 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-04-02 01:32:57.617683 | orchestrator | Thursday 02 April 2026 01:32:16 +0000 (0:00:00.177) 0:00:33.003 ********
2026-04-02 01:32:57.617687 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617691 | orchestrator |
2026-04-02 01:32:57.617695 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-04-02 01:32:57.617698 | orchestrator | Thursday 02 April 2026 01:32:20 +0000 (0:00:03.597) 0:00:36.600 ********
2026-04-02 01:32:57.617702 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617706 | orchestrator |
2026-04-02 01:32:57.617710 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-04-02 01:32:57.617721 | orchestrator | Thursday 02 April 2026 01:32:22 +0000 (0:00:01.830) 0:00:38.430 ********
2026-04-02 01:32:57.617725 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617729 | orchestrator |
2026-04-02 01:32:57.617733 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-04-02 01:32:57.617736 | orchestrator | Thursday 02 April 2026 01:32:26 +0000 (0:00:03.705) 0:00:42.135 ********
2026-04-02 01:32:57.617740 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617744 | orchestrator |
2026-04-02 01:32:57.617748 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-04-02 01:32:57.617752 | orchestrator | Thursday 02 April 2026 01:32:26 +0000 (0:00:00.183) 0:00:42.319 ********
2026-04-02 01:32:57.617755 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.617760 | orchestrator |
2026-04-02 01:32:57.617763 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-04-02 01:32:57.617770 | orchestrator | Thursday 02 April 2026 01:32:29 +0000 (0:00:02.875) 0:00:45.194 ********
2026-04-02 01:32:57.617777 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.617783 | orchestrator |
2026-04-02 01:32:57.617793 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-04-02 01:32:57.617800 | orchestrator | Thursday 02 April 2026 01:32:37 +0000 (0:00:08.684) 0:00:53.879 ********
2026-04-02 01:32:57.617806 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.617813 | orchestrator |
2026-04-02 01:32:57.617819 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-04-02 01:32:57.617826 | orchestrator | Thursday 02 April 2026 01:32:38 +0000 (0:00:00.681) 0:00:54.560 ********
2026-04-02 01:32:57.617832 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617838 | orchestrator |
2026-04-02 01:32:57.617845 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-04-02 01:32:57.617850 | orchestrator | Thursday 02 April 2026 01:32:40 +0000 (0:00:01.521) 0:00:56.082 ********
2026-04-02 01:32:57.617856 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617861 | orchestrator |
2026-04-02 01:32:57.617868 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-04-02 01:32:57.617873 | orchestrator | Thursday 02 April 2026 01:32:41 +0000 (0:00:01.578) 0:00:57.660 ********
2026-04-02 01:32:57.617880 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617886 | orchestrator |
2026-04-02 01:32:57.617892 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-04-02 01:32:57.617906 | orchestrator | Thursday 02 April 2026 01:32:41 +0000 (0:00:00.193) 0:00:57.853 ********
2026-04-02 01:32:57.617913 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617919 | orchestrator |
2026-04-02 01:32:57.617933 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-04-02 01:32:57.617940 | orchestrator | Thursday 02 April 2026 01:32:42 +0000 (0:00:00.343) 0:00:58.197 ********
2026-04-02 01:32:57.617947 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-02 01:32:57.617953 | orchestrator |
2026-04-02 01:32:57.617960 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-04-02 01:32:57.617986 | orchestrator | Thursday 02 April 2026 01:32:46 +0000 (0:00:03.928) 0:01:02.125 ********
2026-04-02 01:32:57.617993 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-04-02 01:32:57.618000 | orchestrator |  "changed": false,
2026-04-02 01:32:57.618006 | orchestrator |  "msg": "All assertions passed"
2026-04-02 01:32:57.618061 | orchestrator | }
2026-04-02 01:32:57.618068 | orchestrator |
2026-04-02 01:32:57.618075 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-04-02 01:32:57.618082 | orchestrator | Thursday 02 April 2026 01:32:46 +0000 (0:00:00.179) 0:01:02.305 ********
2026-04-02 01:32:57.618089 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-02 01:32:57.618097 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-02 01:32:57.618103 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:32:57.618110 | orchestrator |
2026-04-02 01:32:57.618116 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-04-02 01:32:57.618123 | orchestrator | Thursday 02 April 2026 01:32:46 +0000 (0:00:00.209) 0:01:02.514 ********
2026-04-02 01:32:57.618130 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:32:57.618136 | orchestrator |
2026-04-02 01:32:57.618142 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-04-02 01:32:57.618149 | orchestrator | Thursday 02 April 2026 01:32:46 +0000 (0:00:00.166) 0:01:02.681 ********
2026-04-02 01:32:57.618155 | orchestrator | ok: [testbed-manager]
2026-04-02 01:32:57.618161 | orchestrator |
2026-04-02 01:32:57.618168 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-04-02 01:32:57.618174 | orchestrator | Thursday 02 April 2026 01:32:47 +0000 (0:00:00.468) 0:01:03.149 ********
2026-04-02 01:32:57.618180 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.618186 | orchestrator |
2026-04-02 01:32:57.618193 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-04-02 01:32:57.618199 | orchestrator | Thursday 02 April 2026 01:32:48 +0000 (0:00:00.897) 0:01:04.047 ********
2026-04-02 01:32:57.618205 | orchestrator | ok: [testbed-manager]
2026-04-02 01:32:57.618212 | orchestrator |
2026-04-02 01:32:57.618218 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-04-02 01:32:57.618225 | orchestrator | Thursday 02 April 2026 01:32:48 +0000 (0:00:00.439) 0:01:04.487 ********
2026-04-02 01:32:57.618334 | orchestrator | skipping: [testbed-manager]
2026-04-02 01:32:57.618342 | orchestrator |
2026-04-02 01:32:57.618348 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-04-02 01:32:57.618353 | orchestrator | Thursday 02 April 2026 01:32:48 +0000 (0:00:00.300) 0:01:04.788 ********
2026-04-02 01:32:57.618360 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-02 01:32:57.618366 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-04-02 01:32:57.618372 | orchestrator |
2026-04-02 01:32:57.618379 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-04-02 01:32:57.618385 | orchestrator | Thursday 02 April 2026 01:32:56 +0000 (0:00:07.799) 0:01:12.587 ********
2026-04-02 01:32:57.618391 | orchestrator | changed: [testbed-manager]
2026-04-02 01:32:57.618405 | orchestrator |
2026-04-02 01:32:57.618411 | orchestrator | PLAY RECAP *********************************************************************
2026-04-02 01:32:57.618418 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-02 01:32:57.618425 | orchestrator |
2026-04-02 01:32:57.618431 | orchestrator |
2026-04-02 01:32:57.618437 | orchestrator | TASKS RECAP ********************************************************************
2026-04-02 01:32:57.618443 | orchestrator | Thursday 02 April 2026 01:32:57 +0000 (0:00:01.040) 0:01:13.628 ********
2026-04-02 01:32:57.618449 | orchestrator | ===============================================================================
2026-04-02 01:32:57.618455 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.89s
2026-04-02 01:32:57.618461 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.96s
2026-04-02 01:32:57.618467 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.68s
2026-04-02 01:32:57.618473 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.80s
2026-04-02 01:32:57.618485 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.93s
2026-04-02 01:32:57.618492 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.71s
2026-04-02 01:32:57.618498 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.60s
2026-04-02 01:32:57.618504 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.88s
2026-04-02 01:32:57.618510 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.83s
2026-04-02 01:32:57.618516 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.58s
2026-04-02 01:32:57.618522 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.52s
2026-04-02 01:32:57.618528 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.22s
2026-04-02 01:32:57.618534 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.04s
2026-04-02 01:32:57.618540 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.02s
2026-04-02 01:32:57.618546 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.90s
2026-04-02 01:32:57.618553 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.68s
2026-04-02 01:32:57.618559 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.47s
2026-04-02 01:32:57.618573 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.44s
2026-04-02 01:32:57.855604 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.41s
2026-04-02 01:32:57.855682 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.34s
2026-04-02 01:32:58.047690 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-02 01:32:58.050236 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-02 01:32:58.054204 | orchestrator |
2026-04-02 01:32:58.054347 | orchestrator | ## IDENTITY (API)
2026-04-02 01:32:58.054361 | orchestrator |
2026-04-02 01:32:58.054367 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-02 01:32:58.054374 | orchestrator | + echo
2026-04-02 01:32:58.054381 | orchestrator | + echo '## IDENTITY (API)'
2026-04-02 01:32:58.054388 | orchestrator | + echo
2026-04-02 01:32:58.054394 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-02 01:32:58.054403 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-02 01:32:58.054517 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-04-02 01:32:58.055616 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-02 01:32:58.056987 | orchestrator | + tee -a /opt/tempest/20260402-0132.log
2026-04-02 01:33:01.799002 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-02 01:33:01.799121 | orchestrator | Did you mean one of these?
2026-04-02 01:33:01.799131 | orchestrator | help
2026-04-02 01:33:01.799136 | orchestrator | init
2026-04-02 01:33:02.154465 | orchestrator |
2026-04-02 01:33:02.154561 | orchestrator | ## IMAGE (API)
2026-04-02 01:33:02.154571 | orchestrator |
2026-04-02 01:33:02.154577 | orchestrator | + echo
2026-04-02 01:33:02.154584 | orchestrator | + echo '## IMAGE (API)'
2026-04-02 01:33:02.154591 | orchestrator | + echo
2026-04-02 01:33:02.154598 | orchestrator | + _tempest tempest.api.image.v2
2026-04-02 01:33:02.154605 | orchestrator | + local regex=tempest.api.image.v2
2026-04-02 01:33:02.155120 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-04-02 01:33:02.156961 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-02 01:33:02.160984 | orchestrator | + tee -a /opt/tempest/20260402-0133.log
2026-04-02 01:33:05.617194 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-02 01:33:05.617342 | orchestrator | Did you mean one of these?
2026-04-02 01:33:05.617360 | orchestrator | help
2026-04-02 01:33:05.617369 | orchestrator | init
2026-04-02 01:33:05.979734 | orchestrator |
2026-04-02 01:33:05.979853 | orchestrator | ## NETWORK (API)
2026-04-02 01:33:05.979865 | orchestrator |
2026-04-02 01:33:05.979871 | orchestrator | + echo
2026-04-02 01:33:05.979877 | orchestrator | + echo '## NETWORK (API)'
2026-04-02 01:33:05.979884 | orchestrator | + echo
2026-04-02 01:33:05.979890 | orchestrator | + _tempest tempest.api.network
2026-04-02 01:33:05.979896 | orchestrator | + local regex=tempest.api.network
2026-04-02 01:33:05.979905 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-02 01:33:05.985521 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-02 01:33:05.991974 | orchestrator | + tee -a /opt/tempest/20260402-0133.log
2026-04-02 01:33:09.533789 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-02 01:33:09.533885 | orchestrator | Did you mean one of these?
2026-04-02 01:33:09.533899 | orchestrator | help
2026-04-02 01:33:09.533907 | orchestrator | init
2026-04-02 01:33:09.899444 | orchestrator |
2026-04-02 01:33:09.899516 | orchestrator | ## VOLUME (API)
2026-04-02 01:33:09.899522 | orchestrator |
2026-04-02 01:33:09.899527 | orchestrator | + echo
2026-04-02 01:33:09.899531 | orchestrator | + echo '## VOLUME (API)'
2026-04-02 01:33:09.899536 | orchestrator | + echo
2026-04-02 01:33:09.899540 | orchestrator | + _tempest tempest.api.volume
2026-04-02 01:33:09.899544 | orchestrator | + local regex=tempest.api.volume
2026-04-02 01:33:09.899572 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-02 01:33:09.899656 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-02 01:33:09.901454 | orchestrator | + tee -a /opt/tempest/20260402-0133.log
2026-04-02 01:33:13.448068 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-02 01:33:13.448253 | orchestrator | Did you mean one of these?
2026-04-02 01:33:13.448267 | orchestrator | help
2026-04-02 01:33:13.448274 | orchestrator | init
2026-04-02 01:33:13.809255 | orchestrator | + echo
2026-04-02 01:33:13.810211 | orchestrator |
2026-04-02 01:33:13.810256 | orchestrator | ## COMPUTE (API)
2026-04-02 01:33:13.810262 | orchestrator |
2026-04-02 01:33:13.810267 | orchestrator | + echo '## COMPUTE (API)'
2026-04-02 01:33:13.810272 | orchestrator | + echo
2026-04-02 01:33:13.810277 | orchestrator | + _tempest tempest.api.compute
2026-04-02 01:33:13.810349 | orchestrator | + local regex=tempest.api.compute
2026-04-02 01:33:13.810357 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-02 01:33:13.811638 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-02 01:33:13.813664 | orchestrator | + tee -a /opt/tempest/20260402-0133.log
2026-04-02 01:33:17.428485 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-02 01:33:17.428552 | orchestrator | Did you mean one of these?
2026-04-02 01:33:17.428563 | orchestrator | help
2026-04-02 01:33:17.428571 | orchestrator | init
2026-04-02 01:33:17.788417 | orchestrator |
2026-04-02 01:33:17.788482 | orchestrator | + echo
2026-04-02 01:33:17.789228 | orchestrator | ## DNS (API)
2026-04-02 01:33:17.789240 | orchestrator |
2026-04-02 01:33:17.789245 | orchestrator | + echo '## DNS (API)'
2026-04-02 01:33:17.789252 | orchestrator | + echo
2026-04-02 01:33:17.789258 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-02 01:33:17.789263 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-02 01:33:17.789404 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-02 01:33:17.790808 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-02 01:33:17.792333 | orchestrator | + tee -a /opt/tempest/20260402-0133.log
2026-04-02 01:33:21.367656 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-02 01:33:21.367748 | orchestrator | Did you mean one of these?
2026-04-02 01:33:21.367760 | orchestrator | help
2026-04-02 01:33:21.367771 | orchestrator | init
2026-04-02 01:33:21.750733 | orchestrator |
2026-04-02 01:33:21.750815 | orchestrator | ## OBJECT-STORE (API)
2026-04-02 01:33:21.750845 | orchestrator |
2026-04-02 01:33:21.750859 | orchestrator | + echo
2026-04-02 01:33:21.750866 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-02 01:33:21.750873 | orchestrator | + echo
2026-04-02 01:33:21.750880 | orchestrator | + _tempest tempest.api.object_storage
2026-04-02 01:33:21.750888 | orchestrator | + local regex=tempest.api.object_storage
2026-04-02 01:33:21.751771 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-02 01:33:21.753827 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-02 01:33:21.757578 | orchestrator | + tee -a /opt/tempest/20260402-0133.log
2026-04-02 01:33:25.320631 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-02 01:33:25.320695 | orchestrator | Did you mean one of these?
2026-04-02 01:33:25.320702 | orchestrator | help
2026-04-02 01:33:25.320707 | orchestrator | init
2026-04-02 01:33:25.932301 | orchestrator | ok: Runtime: 0:01:56.532654
2026-04-02 01:33:25.952043 |
2026-04-02 01:33:25.952224 | TASK [Check prometheus alert status]
2026-04-02 01:33:26.491357 | orchestrator | skipping: Conditional result was False
2026-04-02 01:33:26.494915 |
2026-04-02 01:33:26.495101 | PLAY RECAP
2026-04-02 01:33:26.495282 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-02 01:33:26.495374 |
2026-04-02 01:33:26.784595 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-02 01:33:26.787494 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-02 01:33:27.530331 |
2026-04-02 01:33:27.530491 | PLAY [Post output play]
2026-04-02 01:33:27.546715 |
2026-04-02 01:33:27.546863 | LOOP [stage-output : Register sources]
2026-04-02 01:33:27.616387 |
2026-04-02 01:33:27.616710 | TASK [stage-output : Check sudo]
2026-04-02 01:33:28.441818 | orchestrator | sudo: a password is required
2026-04-02 01:33:28.655622 | orchestrator | ok: Runtime: 0:00:00.009260
2026-04-02 01:33:28.672168 |
2026-04-02 01:33:28.672361 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-02 01:33:28.710572 |
2026-04-02 01:33:28.710880 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-02 01:33:28.785550 | orchestrator | ok
2026-04-02 01:33:28.793828 |
2026-04-02 01:33:28.793980 | LOOP [stage-output : Ensure target folders exist]
2026-04-02 01:33:29.272178 | orchestrator | ok: "docs"
2026-04-02 01:33:29.272520 |
2026-04-02 01:33:29.537046 | orchestrator | ok: "artifacts"
2026-04-02 01:33:29.799767 | orchestrator | ok: "logs"
2026-04-02 01:33:29.813776 |
2026-04-02 01:33:29.813915 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-02 01:33:29.846345 |
2026-04-02 01:33:29.846556 | TASK [stage-output : Make all log files readable]
2026-04-02 01:33:30.137334 | orchestrator | ok
2026-04-02 01:33:30.147504 |
2026-04-02 01:33:30.147667 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-02 01:33:30.183995 | orchestrator | skipping: Conditional result was False
2026-04-02 01:33:30.205464 |
2026-04-02 01:33:30.205665 | TASK [stage-output : Discover log files for compression]
2026-04-02 01:33:30.231133 | orchestrator | skipping: Conditional result was False
2026-04-02 01:33:30.248553 |
2026-04-02 01:33:30.248766 | LOOP [stage-output : Archive everything from logs]
2026-04-02 01:33:30.302546 |
2026-04-02 01:33:30.302742 | PLAY [Post cleanup play]
2026-04-02 01:33:30.311677 |
2026-04-02 01:33:30.311791 | TASK [Set cloud fact (Zuul deployment)]
2026-04-02 01:33:30.371722 | orchestrator | ok
2026-04-02 01:33:30.385531 |
2026-04-02 01:33:30.385681 | TASK [Set cloud fact (local deployment)]
2026-04-02 01:33:30.422367 | orchestrator | skipping: Conditional result was False
2026-04-02 01:33:30.439757 |
2026-04-02 01:33:30.439914 | TASK [Clean the cloud environment]
2026-04-02 01:33:31.146968 | orchestrator | 2026-04-02 01:33:31 - clean up servers
2026-04-02 01:33:31.896521 | orchestrator | 2026-04-02 01:33:31 - testbed-manager
2026-04-02 01:33:31.983824 | orchestrator | 2026-04-02 01:33:31 - testbed-node-5
2026-04-02 01:33:32.063770 | orchestrator | 2026-04-02 01:33:32 - testbed-node-0
2026-04-02 01:33:32.153144 | orchestrator | 2026-04-02 01:33:32 - testbed-node-1
2026-04-02 01:33:32.244640 | orchestrator | 2026-04-02 01:33:32 - testbed-node-3
2026-04-02 01:33:32.331381 | orchestrator | 2026-04-02 01:33:32 - testbed-node-4
2026-04-02 01:33:32.416264 | orchestrator | 2026-04-02 01:33:32 - testbed-node-2
2026-04-02 01:33:32.504122 | orchestrator | 2026-04-02 01:33:32 - clean up keypairs
2026-04-02 01:33:32.520084 | orchestrator | 2026-04-02 01:33:32 - testbed
2026-04-02 01:33:32.542109 | orchestrator | 2026-04-02 01:33:32 - wait for servers to be gone
2026-04-02 01:33:45.467302 | orchestrator | 2026-04-02 01:33:45 - clean up ports
2026-04-02 01:33:45.676088 | orchestrator | 2026-04-02 01:33:45 - 0d4bee37-e559-4f33-a779-3aa2b25d6a21
2026-04-02 01:33:45.919547 | orchestrator | 2026-04-02 01:33:45 - 11a1a2af-31de-4fd0-845d-3ad2e87cf17a
2026-04-02 01:33:46.998826 | orchestrator | 2026-04-02 01:33:46 - 15b43896-05c9-4b0b-89c9-6a014ef6192e
2026-04-02 01:33:47.275254 | orchestrator | 2026-04-02 01:33:47 - 73144b9d-8a3d-478a-998e-4a7552fd086a
2026-04-02 01:33:47.478666 | orchestrator | 2026-04-02 01:33:47 - 77abafef-e48c-48d0-b36a-74bd64ef0dec
2026-04-02 01:33:47.828984 | orchestrator | 2026-04-02 01:33:47 - 97a7aa63-bea3-4714-b061-6d07cb7688db
2026-04-02 01:33:48.036976 | orchestrator | 2026-04-02 01:33:48 - e747d327-3862-4b88-9333-e47092a04e50
2026-04-02 01:33:48.269433 | orchestrator | 2026-04-02 01:33:48 - clean up volumes
2026-04-02 01:33:48.394235 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-3-node-base
2026-04-02 01:33:48.440920 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-0-node-base
2026-04-02 01:33:48.486837 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-2-node-base
2026-04-02 01:33:48.528577 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-1-node-base
2026-04-02 01:33:48.570953 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-5-node-base
2026-04-02 01:33:48.609326 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-4-node-base
2026-04-02 01:33:48.650988 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-manager-base
2026-04-02 01:33:48.696175 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-3-node-3
2026-04-02 01:33:48.735036 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-0-node-3
2026-04-02 01:33:48.776453 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-7-node-4
2026-04-02 01:33:48.816453 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-1-node-4
2026-04-02 01:33:48.855170 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-4-node-4
2026-04-02 01:33:48.893608 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-5-node-5
2026-04-02 01:33:48.935593 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-8-node-5
2026-04-02 01:33:48.975069 | orchestrator | 2026-04-02 01:33:48 - testbed-volume-6-node-3
2026-04-02 01:33:49.014927 | orchestrator | 2026-04-02 01:33:49 - testbed-volume-2-node-5
2026-04-02 01:33:49.059554 | orchestrator | 2026-04-02 01:33:49 - disconnect routers
2026-04-02 01:33:49.183020 | orchestrator | 2026-04-02 01:33:49 - testbed
2026-04-02 01:33:50.246924 | orchestrator | 2026-04-02 01:33:50 - clean up subnets
2026-04-02 01:33:50.300451 | orchestrator | 2026-04-02 01:33:50 - subnet-testbed-management
2026-04-02 01:33:50.457863 | orchestrator | 2026-04-02 01:33:50 - clean up networks
2026-04-02 01:33:51.151533 | orchestrator | 2026-04-02 01:33:51 - net-testbed-management
2026-04-02 01:33:51.459882 | orchestrator | 2026-04-02 01:33:51 - clean up security groups
2026-04-02 01:33:51.499166 | orchestrator | 2026-04-02 01:33:51 - testbed-node
2026-04-02 01:33:51.623816 | orchestrator | 2026-04-02 01:33:51 - testbed-management
2026-04-02 01:33:51.741895 | orchestrator | 2026-04-02 01:33:51 - clean up floating ips
2026-04-02 01:33:51.778403 | orchestrator | 2026-04-02 01:33:51 - 81.163.193.251
2026-04-02 01:33:52.163847 | orchestrator | 2026-04-02 01:33:52 - clean up routers
2026-04-02 01:33:52.254110 | orchestrator | 2026-04-02 01:33:52 - testbed
2026-04-02 01:33:53.496887 | orchestrator | ok: Runtime: 0:00:22.402591
2026-04-02 01:33:53.501543 |
2026-04-02 01:33:53.501710 | PLAY RECAP
2026-04-02 01:33:53.501837 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-02 01:33:53.501900 |
2026-04-02 01:33:53.645148 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-02 01:33:53.647749 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-02 01:33:54.391830 |
2026-04-02 01:33:54.392009 | PLAY [Cleanup play]
2026-04-02 01:33:54.409088 |
2026-04-02 01:33:54.409256 | TASK [Set cloud fact (Zuul deployment)]
2026-04-02 01:33:54.469341 | orchestrator | ok
2026-04-02 01:33:54.478632 |
2026-04-02 01:33:54.478783 | TASK [Set cloud fact (local deployment)]
2026-04-02 01:33:54.513596 | orchestrator | skipping: Conditional result was False
2026-04-02 01:33:54.531703 |
2026-04-02 01:33:54.531852 | TASK [Clean the cloud environment]
2026-04-02 01:33:55.708329 | orchestrator | 2026-04-02 01:33:55 - clean up servers
2026-04-02 01:33:56.185097 | orchestrator | 2026-04-02 01:33:56 - clean up keypairs
2026-04-02 01:33:56.200217 | orchestrator | 2026-04-02 01:33:56 - wait for servers to be gone
2026-04-02 01:33:56.242130 | orchestrator | 2026-04-02 01:33:56 - clean up ports
2026-04-02 01:33:56.347220 | orchestrator | 2026-04-02 01:33:56 - clean up volumes
2026-04-02 01:33:56.447218 | orchestrator | 2026-04-02 01:33:56 - disconnect routers
2026-04-02 01:33:56.473693 | orchestrator | 2026-04-02 01:33:56 - clean up subnets
2026-04-02 01:33:57.004891 | orchestrator | 2026-04-02 01:33:57 - clean up networks
2026-04-02 01:33:57.167707 | orchestrator | 2026-04-02 01:33:57 - clean up security groups
2026-04-02 01:33:57.197938 | orchestrator | 2026-04-02 01:33:57 - clean up floating ips
2026-04-02 01:33:57.227859 | orchestrator | 2026-04-02 01:33:57 - clean up routers
2026-04-02 01:33:57.571383 | orchestrator | ok: Runtime: 0:00:01.976053
2026-04-02 01:33:57.575203 |
2026-04-02 01:33:57.575393 | PLAY RECAP
2026-04-02 01:33:57.575517 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-02 01:33:57.575578 |
2026-04-02 01:33:57.703110 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-02 01:33:57.704249 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-02 01:33:58.451960 |
2026-04-02 01:33:58.452128 | PLAY [Base post-fetch]
2026-04-02 01:33:58.467602 |
2026-04-02 01:33:58.467741 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-02 01:33:58.533370 | orchestrator | skipping: Conditional result was False
2026-04-02 01:33:58.540357 |
2026-04-02 01:33:58.540513 | TASK [fetch-output : Set log path for single node]
2026-04-02 01:33:58.595935 | orchestrator | ok
2026-04-02 01:33:58.604745 |
2026-04-02 01:33:58.604878 | LOOP [fetch-output : Ensure local output dirs]
2026-04-02 01:33:59.120661 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/work/logs"
2026-04-02 01:33:59.403932 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/work/artifacts"
2026-04-02 01:33:59.714257 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7d3b30e5ba19432986235cf5def78ef7/work/docs"
2026-04-02 01:33:59.728140 |
2026-04-02 01:33:59.728297 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-02 01:34:00.714310 | orchestrator | changed: .d..t...... ./
2026-04-02 01:34:00.714674 | orchestrator | changed: All items complete
2026-04-02 01:34:00.714738 |
2026-04-02 01:34:01.468327 | orchestrator | changed: .d..t...... ./
2026-04-02 01:34:02.164586 | orchestrator | changed: .d..t...... ./
2026-04-02 01:34:02.190598 |
2026-04-02 01:34:02.190760 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-02 01:34:02.229829 | orchestrator | skipping: Conditional result was False
2026-04-02 01:34:02.232321 | orchestrator | skipping: Conditional result was False
2026-04-02 01:34:02.249063 |
2026-04-02 01:34:02.249178 | PLAY RECAP
2026-04-02 01:34:02.249272 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-02 01:34:02.249311 |
2026-04-02 01:34:02.381834 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-02 01:34:02.383042 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-02 01:34:03.148512 |
2026-04-02 01:34:03.148685 | PLAY [Base post]
2026-04-02 01:34:03.163533 |
2026-04-02 01:34:03.163685 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-02 01:34:04.239934 | orchestrator | changed
2026-04-02 01:34:04.254698 |
2026-04-02 01:34:04.254902 | PLAY RECAP
2026-04-02 01:34:04.254998 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-02 01:34:04.255087 |
2026-04-02 01:34:04.385635 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-02 01:34:04.386711 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-02 01:34:05.174506 |
2026-04-02 01:34:05.174699 | PLAY [Base post-logs]
2026-04-02 01:34:05.185677 |
2026-04-02 01:34:05.185825 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-02 01:34:05.669403 | localhost | changed
2026-04-02 01:34:05.679744 |
2026-04-02 01:34:05.679894 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-02 01:34:05.718741 | localhost | ok
2026-04-02 01:34:05.724598 |
2026-04-02 01:34:05.724749 | TASK [Set zuul-log-path fact]
2026-04-02 01:34:05.747167 | localhost | ok
2026-04-02 01:34:05.766989 |
2026-04-02 01:34:05.767166 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-02 01:34:05.795014 | localhost | ok
2026-04-02 01:34:05.798361 |
2026-04-02 01:34:05.798476 | TASK [upload-logs : Create log directories]
2026-04-02 01:34:06.321251 | localhost | changed
2026-04-02 01:34:06.327449 |
2026-04-02 01:34:06.327614 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-02 01:34:06.849569 | localhost -> localhost | ok: Runtime: 0:00:00.009321
2026-04-02 01:34:06.860414 |
2026-04-02 01:34:06.860573 | TASK [upload-logs : Upload logs to log server]
2026-04-02 01:34:07.472912 | localhost | Output suppressed because no_log was given
2026-04-02 01:34:07.477793 |
2026-04-02 01:34:07.478033 | LOOP [upload-logs : Compress console log and json output]
2026-04-02 01:34:07.533688 | localhost | skipping: Conditional result was False
2026-04-02 01:34:07.539841 | localhost | skipping: Conditional result was False
2026-04-02 01:34:07.552066 |
2026-04-02 01:34:07.552346 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-02 01:34:07.601479 | localhost | skipping: Conditional result was False
2026-04-02 01:34:07.602164 |
2026-04-02 01:34:07.605564 | localhost | skipping: Conditional result was False
2026-04-02 01:34:07.617543 |
2026-04-02 01:34:07.617713 | LOOP [upload-logs : Upload console log and json output]